New experimental Lens Distort TOP

Hi,
I am very happy to see the new Lens Distort TOP in the experimental series, great work. :slightly_smiling_face: I have recently implemented undistortion in GLSL, so I have some ideas on how to possibly push this implementation even further to better suit various applications. Here are my two cents:

  • It would be great if the Lens Distort TOP also had support for newCameraMatrix. This way it would be possible to adjust scaling and positioning based on the new camera matrix from getOptimalNewCameraMatrix().
  • I believe it would be best if the Lens Distort TOP always preserved the native resolution of the input texture. That means: if the input has 2K resolution and the Lens Distort TOP is set to 4K resolution, the input is first placed at its native 2K resolution (rather than scaled up to 4K) and then undistorted / redistorted. This way you can precisely specify the ROI (which could also be generated by OpenCV’s getOptimalNewCameraMatrix()).
  • In my opinion, the last piece needed for complete control over the undistort workflow is the ability to specify custom scale and transform values. With these options it is possible to perform all sorts of undistort / redistort workflows.
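To make the scale/transform idea concrete, here is a minimal plain-Python sketch of what an undistort lookup does with a newCameraMatrix. All names and the two-coefficient distortion model are illustrative only, not the TOP’s actual implementation:

```python
def pixel_to_normalized(px, py, K):
    """Pixel coordinate -> normalized camera coordinates,
    with intrinsics K = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = K
    return (px - cx) / fx, (py - cy) / fy

def normalized_to_pixel(x, y, K):
    """Normalized camera coordinates -> pixel coordinate."""
    fx, fy, cx, cy = K
    return x * fx + cx, y * fy + cy

def radial_distort(x, y, dist):
    """Brown-Conrady radial distortion, truncated to k1 and k2."""
    k1, k2 = dist
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort_sample_pos(px, py, K_new, K, dist):
    """For an output pixel, find where to sample the distorted input:
    output pixel -> normalized via the new matrix -> distort ->
    input pixel via the original matrix."""
    x, y = pixel_to_normalized(px, py, K_new)
    xd, yd = radial_distort(x, y, dist)
    return normalized_to_pixel(xd, yd, K)
```

Scaling K_new’s focal lengths zooms the undistorted result, and moving its center repositions it, which is all the scale/transform control really amounts to.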

To explain why you would want these types of controls, here are a couple of example workflows:

  1. Classic VFX workflow
    You undistort the input camera, but you need to retain all the original information (without losing data around the corners). It is definitely not desirable to scale the undistorted image down to fit the original frame (since you would lose data). Therefore you just increase your resolution, so that the stretched corners are still part of your undistorted image. You can then composite the 3D render and redistort the image. This time you would lower your resolution to get back to the original format.

  2. Fixed format workflow
    You undistort the input camera, but it is not desirable to change the resolution. However, you would like to keep as much data as possible, so you would use newCameraMatrix (from getOptimalNewCameraMatrix()) to properly scale the input image.

  3. Another fixed format workflow
    You follow the steps described above, but you realize getOptimalNewCameraMatrix() changes your aspect ratio when alpha=0. That might not be desirable, so you might decide to use alpha=1 instead. This scales your image down (in the case of barrel distortion) so that it fits completely inside the original image format while keeping its aspect ratio fixed. This is not exactly what you wanted, but now you can use the ROI to either crop the image, or calculate your own scale and transform (based on the ROI and the original image format) that essentially scales your image back up to perfectly fill the original format.
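The “calculate your own scale and transform from the ROI” step in workflow 3 can be sketched like this. This is a hypothetical helper, assuming an (x, y, w, h) ROI in pixels as returned by getOptimalNewCameraMatrix():

```python
def roi_fill_scale_transform(roi, width, height):
    """Given the valid-pixel ROI (x, y, w, h), compute the scale and
    transform needed to stretch that region back over the full
    (width, height) frame. Note the result is non-uniform unless the
    ROI happens to have the same aspect ratio as the frame."""
    x, y, w, h = roi
    sx = width / w
    sy = height / h
    # move the ROI origin to (0, 0), then scale the region up
    tx = -x * sx
    ty = -y * sy
    return (sx, sy), (tx, ty)
```

With alpha=1 the whole valid region then exactly fills the original format again, at the cost of resampling.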

I hope these features can be implemented so that I won’t have to maintain my GLSL undistort :smiley: (I am using it for now, as it has these features). I believe it isn’t too hard to implement, and it could open up a lot of possibilities for various use cases. :slightly_smiling_face: Thank you very much for creating the Lens Distort TOP.


Thanks for the feedback. It’s really useful to hear how you would use these OPs - particularly while they are still in the experimental phase.

I’m not familiar with the getOptimalNewCameraMatrix function in OpenCV, so I’ll need to do a little research there, but I had already been wondering about how to potentially implement a scaling feature.

With our Stype TOP that handles lens distortion for that system, we have a Padding value that sounds a little similar to what you’re describing. However, it only crops out the center of the image after the distortion is applied and does not change the resolution.

If I understand it, we’re potentially looking at a scale menu that lets you choose between options like optimal or custom scale/transform, and a toggle to either adjust the texture resolution according to the distortion or preserve the input resolution?


> If I understand it, we’re potentially looking at a scale menu that lets you choose between options like optimal or custom scale/transform, and a toggle to either adjust the texture resolution according to the distortion or preserve the input resolution?

Yes, something like that, but I am not sure such a menu would allow enough control, as there are scenarios where you might want to let the user adjust these parameters directly. On the other hand, if you covered those scenarios as well, it might be quite easy to operate.

In fact, it might theoretically be quite cool if getOptimalNewCameraMatrix could be run directly inside the Lens Distort TOP (based on the camera matrix, distortion coefficients and a specified resolution). That could work quite nicely - especially for variable camera matrices and distortions - though I haven’t tested this exact setup before.
However, there would have to be some options for how to apply the new camera matrix and ROI, as the user won’t have direct control over their adjustments. By adjustments I mean, for example, something like this:

  • the option not to move the center of the image with the new camera matrix
  • the option not to scale the image down with the new camera matrix - apply it only if the image needs to be scaled up (pincushion distortion)
  • the option to keep the aspect ratio untouched
  • the option to read these calculated (and adjusted) values from an Info CHOP or as Python attributes
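The options above could be sketched as constraints applied on top of the optimal matrix. This is a rough illustration with made-up names, treating intrinsics as (fx, fy, cx, cy) tuples:

```python
def constrain_new_camera_matrix(K, K_new, keep_center=False,
                                only_scale_up=False, keep_aspect=False):
    """Apply the user constraints discussed above to an 'optimal'
    camera matrix. K and K_new are (fx, fy, cx, cy) tuples."""
    fx, fy, cx, cy = K
    nfx, nfy, ncx, ncy = K_new
    if keep_center:
        # don't move the principal point away from the original
        ncx, ncy = cx, cy
    if only_scale_up:
        # never shrink the image: keep at least the original focal lengths
        nfx, nfy = max(nfx, fx), max(nfy, fy)
    if keep_aspect:
        # force a uniform scale by averaging the per-axis scale factors
        s = (nfx / fx + nfy / fy) / 2
        nfx, nfy = fx * s, fy * s
    return nfx, nfy, ncx, ncy
```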

Thanks for the further explanation and examples. I think I’ve got a better understanding of the workflow now.

My preference is to have options both to run getOptimalNewCameraMatrix internally and to let users enter their own new camera matrix and ROI values. The resolution menu will have multiple options, including taking the resolution from the optimal calculations, and using the custom resolution to increase the canvas size while maintaining the native resolution of the input.

I’ll post a test version when I’ve got something and you can let me know if it looks like it will handle your use cases.


Oh, that’s great! I am trying to figure out how to revive the augmented reality stuff I did in 2008 (touch077?). I was rendering onto a rectangular grid and then bending the points around to compensate for the lens. This looks much more civilised. :slight_smile:

I recall that the old ARToolkit has a utility that stares at a chequerboard from several different angles to get the lens distortion parameters.

We’re working on a new tool that works the same way … it uses the OpenCV calibrateCamera function:

https://docs.opencv.org/master/dc/dbb/tutorial_py_calibration.html

I’ve got the new Lens Distort TOP features in now. We’re just doing some final testing and documentation, but they should be available in the next experimental release shortly. The screenshot below shows what the new Layout parameter page looks like as well as some basic examples of what it can do.

It was a little awkward trying to get all of the options to work together smoothly, but I think this should let you do everything. Under the hood it uses the OpenCV function getOptimalNewCameraMatrix to calculate the best new scaling/offset values to fit the undistorted image, but you can also use your own custom values for transforming or cropping the image. The optimal values and the region of interest relative to the current transform are all available in the Info CHOP.

There is also a Native Resolution layout mode that will maintain the resolution of the input image regardless of the overall image size.


Great, I am looking forward to playing with this new setup :slightly_smiling_face:
By looking at the image, I guess you have essentially converted the data from getOptimalNewCameraMatrix into scale + transform values, right? That’s smart - much easier to read.
I assume the following equations for focal length would therefore be valid?

original_fx * scalex = new_fx
original_fy * scaley = new_fy

I guess something similar would also apply to the principal point, right? (I can see there is an R for units, but I am not sure what it represents - I am just guessing here.)

original_cx * transformx = new_cx
original_cy * transformy = new_cy

It’s great to see the optimal values in the Info CHOP. With them, the user can reconstruct the new camera intrinsics afterwards (in case the equations above are valid).
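Assuming the multiplicative relations guessed above actually held (the reply below clarifies that the current implementation stores the values differently), the reconstruction would be as simple as:

```python
def camera_matrix(fx, fy, cx, cy):
    """Assemble a standard 3x3 pinhole camera matrix."""
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

def reconstruct_intrinsics(fx, fy, cx, cy, scalex, scaley, tx, ty):
    """Rebuild the new intrinsics from scale/transform values, under
    the unconfirmed assumption new_f = f * scale, new_c = c * transform."""
    return camera_matrix(fx * scalex, fy * scaley, cx * tx, cy * ty)
```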

Yeah, I was trying to map the OpenCV features to something a little more general, but I’ll admit that, while all of the functionality is there, I’m not entirely satisfied with the interface.

Interestingly, it doesn’t actually work like you’re describing right now, but I may like your idea better.

In the current system, the new Post Transform parameters are actually direct translations of OpenCV’s new camera matrix. I did it this way so that you wouldn’t need to do the math you described and could use the values directly. However, it does mean the new scale/center parameters work inversely to the original focal length/center values, which feels like it will be confusing.

I might take another look at it and consider applying the new transform as a modifier on the original C/F values rather than as separate values.

I’ve added new unit menus to replace the general ‘Camera Matrix Units’ menu from the current version. This is so that you can enter anything as either pixels or normalized/fraction values. You can also choose whether the center values are given in absolute coordinates (measured from the bottom-left), or relative to the center of the image.

The ‘R’ you noticed is for relative normalized units, so the center point is given as -0.5 to 0.5 relative to the center of the image. For opencv-based data, you’ll probably want to use the absolute-pixels units where the center is given as a pixel position measured from the bottom-left.
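A sketch of what those unit conversions amount to for the center values - illustrative helpers only, not the TOP’s actual conversion code:

```python
def center_relative_to_pixels(rx, ry, width, height):
    """Convert a principal point from relative normalized units
    (-0.5..0.5, measured from the image center) to absolute pixels
    measured from the bottom-left."""
    return (rx + 0.5) * width, (ry + 0.5) * height

def center_pixels_to_relative(px, py, width, height):
    """Inverse: absolute bottom-left pixels back to relative units."""
    return px / width - 0.5, py / height - 0.5
```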

I see, this sounds good too - I guess it would be fine either way; both the direct and the modifier approach would work, and the only difference might be in user experience, as you mentioned. But I guess it would just be a matter of getting used to it. :slightly_smiling_face:

Great, thank you very much - such unit menus will definitely come in handy.

If I am not mistaken, OpenCV uses a top-left origin (as can be seen in this image), right? However, I guess you have flipped the y axis for easy manipulation (which is great in my opinion).

Yes, it’s not an exact translation because OpenCV uses the top-left position as 0,0 whereas TouchDesigner uses the bottom-left, like OpenGL. We try to keep things consistent inside TouchDesigner, so internally I’m flipping the coordinates before calling the OpenCV functions.
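For anyone feeding OpenCV data in by hand, the flip amounts to the following. These are illustrative helpers (pixel units); note the ROI flip also has to account for the region’s height:

```python
def flip_camera_matrix_y(fx, fy, cx, cy, height):
    """Flip the principal point of an intrinsics tuple between OpenCV's
    top-left origin and OpenGL's bottom-left origin."""
    return fx, fy, cx, height - cy

def flip_roi_y(x, y, w, h, height):
    """Flip an (x, y, w, h) ROI between the two conventions: the
    region's far edge in one system becomes its origin in the other."""
    return x, height - y - h, w, h
```

The same flip works in both directions, so applying it twice returns the original values.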
