I need help figuring out a way to apply a transformation matrix I computed in OpenCV to a webcam feed in Touch.
I'm looking to recreate warpPerspective from OpenCV.
You can use a GLSL TOP to do this. The equation in the documentation can be done in GLSL easily. I think the x and y values in OpenCV are in pixel space, so you'll need to convert vUV from normalized texture coordinates to pixels, apply the transform, then convert back to normalized coordinates before your texture() call.
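Roughly something like this (untested sketch; H and res are placeholder uniform names, and if you can't pass a mat3 directly you can pass three vec3 rows and build the mat3 in the shader). Note the divide by the third component: a full homography needs it before you convert back:

uniform mat3 H;   // the OpenCV matrix (placeholder name)
uniform vec2 res; // input resolution in pixels (placeholder name)
layout(location = 0) out vec4 fragColor;
void main()
{
	// normalized texture coordinates -> OpenCV pixel coordinates
	vec3 p = vec3(vUV.st * res, 1.0);
	// apply the transform
	vec3 q = H * p;
	// perspective divide, then pixels -> normalized coordinates
	fragColor = texture(sTD2DInputs[0], (q.xy / q.z) / res);
}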
I'm looking for the same kind of help; please let me know what you are able to come up with.
FIRST: Thanks malcolm for the advice. I will start researching now. It will take me a while because it will be the first GLSL of any kind I've written.
SECOND: I brought my 3x3 matrix into Touch and then ran modified code I found at stackoverflow.com/questions/1542 … -c-sharp ("How to get rotation, translation, shear from a 3x3 Homography matrix in c#" on Stack Overflow). Note that it does not deal with row 3, and in my matrix row 3 is not 0, 0, 1:
1.1924793003853627e+000, 8.3333289622087331e-002, -1.1465278686521209e+002
1.4951991216002472e-002, 1.2357355367558822e+000, -1.9819963149868363e+000
6.1308661346706718e-005, 1.0139734772951642e-004, 1
import math

def getComponents(normalised_homography):
    '''((translationx, translationy), rotation, (scalex, scaley), shear)'''
    a = normalised_homography[0, 0]
    b = normalised_homography[0, 1]
    c = normalised_homography[0, 2]
    d = normalised_homography[1, 0]
    e = normalised_homography[1, 1]
    f = normalised_homography[1, 2]
    p = math.sqrt(a*a + b*b)        # x scale
    r = (a*e - b*d) / p             # y scale
    q = (a*d + b*e) / (a*e - b*d)   # shear
    translation = (c, f)
    scale = (p, r)
    shear = q
    theta = math.atan2(b, a)        # rotation
    return (translation, theta, scale, shear)
I applied these to a Transform TOP; almost everything was perfect except the x scale…
Also, I tried taking the corner points of the original image, applying the H transformation to them in OpenCV, and feeding the transformed corner points (in TD xyz space) into a Corner Pin TOP: same result, bad x scaling… although OpenCV reported changes in the corner points' z coords, and I did not know how to apply that to a TOP, only to GEOs.
NOTE:
The original H matrix is derived from points in two different images (1280x800 and 1280x960). I am trying to transform the 1280x960 image to match the 1280x800 image after the transformation. However, when I run warpPerspective in OpenCV with 1280x800 as my output Mat size, it works perfectly.
THOUGHTS?
Personally, I think I need to learn all about this “normalize”. Any recommendations for a good GLSL learning resource?
First attempt at GLSL: FAIL.
I tried the code below in a Text DAT feeding a GLSL TOP, with some uniforms set from my matrix.
uniform vec3 MatRow0;
uniform vec3 MatRow1;
uniform vec3 MatRow2;
uniform vec3 rezol;
layout(location = 0) out vec4 fragColor;
void main()
{
	mat3 homography = mat3(
		MatRow0[0], MatRow1[0], MatRow2[0],
		MatRow0[1], MatRow1[1], MatRow2[1],
		MatRow0[2], MatRow1[2], MatRow2[2]
	);
	// transform from normalized coords to OpenCV pixel coords by scaling by the
	// resolution (but OpenCV's origin is at (0, inputTexture.yMax)?)
	vec3 OCV = vec3(vUV.s*rezol[0], vUV.t*rezol[1], 1.0);
	vec3 newPtOcv = OCV * homography;
	vec3 OCVback = vec3(newPtOcv[0]/rezol[0], newPtOcv[1]/rezol[1], 1.0);
	fragColor = texture(sTD2DInputs[0], OCVback.st);
}
RESULT:
A smaller version, like the inverse of what I wanted to do. Instead of grabbing pixel values, I think I need to grab vertex locations and transplant those into my new texture; that should stretch my input texture instead of shrinking it.
Have you come up with anything, bLackburst?
The mat3() constructor takes values column by column, so your constructor is correct.
However, OCV * homography actually multiplies OCV by the transpose of homography. You probably want homography * OCV.
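To illustrate with a toy example:

mat3 M = mat3(1.0, 2.0, 3.0,   // first COLUMN of M
              4.0, 5.0, 6.0,   // second COLUMN of M
              7.0, 8.0, 9.0);  // third COLUMN of M
vec3 v = vec3(1.0, 0.0, 0.0);
vec3 a = M * v; // picks out the first column: (1.0, 2.0, 3.0)
vec3 b = v * M; // picks out the first row: (1.0, 4.0, 7.0), same as transpose(M) * v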
Switching to homography * OCV centered the image, but scaling is off. I am trying to go from this: original.tif (1.46 MB)
To this:
(when projected out of the projector it is perfectly aligned to the blue plane.)
ocv.tif (1.53 MB)
But in TD with GLSL I get this:
glsl.tif (1.1 MB)
Using the planar homography matrix:
1.1924793003853627e+000 8.3333289622087331e-002 -1.1465278686521209e+002
1.4951991216002472e-002 1.2357355367558822e+000 -1.9819963149868363e+000
6.1308661346706718e-005 1.0139734772951642e-004 1.
I then tried creating a vertex shader:
out vec3 texCoord;
uniform vec3 MatRow0;
uniform vec3 MatRow1;
uniform vec3 MatRow2;
uniform vec3 rezol;
void main()
{
	mat3 homography = mat3(
		MatRow0[0], MatRow1[0], MatRow2[0],
		MatRow0[1], MatRow1[1], MatRow2[1],
		MatRow0[2], MatRow1[2], MatRow2[2]
	);
	// vertex position from normalized to OpenCV pixel coords
	vec3 OCV = vec3(P.x*rezol[0], P.y*rezol[1], 1.0);
	vec3 newPtOcv = homography*OCV;
	// back to normalized coords
	vec3 OCVback = vec3(newPtOcv[0]/rezol[0], newPtOcv[1]/rezol[1], 1.0);
	gl_Position = vec4(OCVback, 1.0);
	texCoord = uv[0];
}
With this pixel shader:
layout(location = 0) out vec4 fragColor;
in vec3 texCoord;
void main()
{
	fragColor = texture(sTD2DInputs[0], texCoord.xy);
}
With this result:
vertexS.tif (2.1 MB)
The images aren’t showing up, but you can download them…
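In case anyone lands here later, two things stand out to me (untested, so treat this as a sketch). First: row 3 of that matrix is not (0, 0, 1), so after the transform the third component of newPtOcv drifts away from 1.0 (about 1.18 at the far corner), and neither shader ever divides by it; that alone would look like exactly this kind of position-dependent scale error. Second: warpPerspective inverts the matrix before sampling (unless the WARP_INVERSE_MAP flag is set), so a pixel shader that samples the input should go through the inverse of H. A fragment-shader version with both changes, using the same uniforms as above (I'm ignoring the top-left vs bottom-left origin question raised earlier; if the output comes out mirrored vertically, flip vUV.t before the transform and flip back after):

uniform vec3 MatRow0;
uniform vec3 MatRow1;
uniform vec3 MatRow2;
uniform vec3 rezol;
layout(location = 0) out vec4 fragColor;
void main()
{
	mat3 homography = mat3(
		MatRow0[0], MatRow1[0], MatRow2[0],
		MatRow0[1], MatRow1[1], MatRow2[1],
		MatRow0[2], MatRow1[2], MatRow2[2]
	);
	// sample through the INVERSE, like warpPerspective does internally
	vec3 q = inverse(homography) * vec3(vUV.s * rezol[0], vUV.t * rezol[1], 1.0);
	// perspective divide before going back to normalized coordinates
	vec2 uv = (q.xy / q.z) / rezol.xy;
	fragColor = texture(sTD2DInputs[0], uv);
}

For the vertex-shader route the same divide matters, but there you can let the hardware do it: gl_Position is clip space (x and y land in -1..1 only after the divide by w), so put the projected third component into w and the default perspective-correct interpolation will also fix the texture coordinates across the quad. A sketch, assuming P runs 0..1 across the quad the way the code above assumes:

out vec3 texCoord;
uniform vec3 MatRow0;
uniform vec3 MatRow1;
uniform vec3 MatRow2;
uniform vec3 rezol;
void main()
{
	mat3 homography = mat3(
		MatRow0[0], MatRow1[0], MatRow2[0],
		MatRow0[1], MatRow1[1], MatRow2[1],
		MatRow0[2], MatRow1[2], MatRow2[2]
	);
	// moving geometry is the forward mapping, so no inverse here
	vec3 q = homography * vec3(P.x * rezol[0], P.y * rezol[1], 1.0);
	// normalized 0..1 -> clip space, keeping q.z in w so the hardware
	// divide does the perspective correction
	gl_Position = vec4(2.0 * q.xy / rezol.xy - q.z, 0.0, q.z);
	texCoord = uv[0];
}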