Exporting a CamSchnappr-Calibrated Camera to Blender – Projection Mismatch

Hi all,
I’m currently working on a real-world projection mapping setup using TouchDesigner and Blender. The goal is to calibrate a physical projector and camera using CamSchnappr, then export the resulting camera to Blender for rendering — so that the Blender-rendered image can be reprojected onto a real object with perfect alignment.

Here’s the current workflow:
1. I calibrate the camera in TouchDesigner using CamSchnappr.
2. I export the resulting camera's world transform matrix (extrinsics) as well as the intrinsics (CamSchnappr's custom projection matrix).
3. I import the camera into Blender using Python; position and orientation match perfectly.
4. I reconstruct the focal length in Blender from the intrinsics (e.g. fx from the projection matrix → FOV → lens in mm); see the sketch after this list.
5. I render the image in Blender.
6. I load this image in TouchDesigner and project it back onto the physical object using the calibrated projector.
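
For reference, here is roughly what I do on the Blender side for steps 3 and 4. Everything about the intrinsics is my own interpretation, not anything documented: I assume fx is the focal length expressed as a multiple of the image width, and I map it onto an assumed 36 mm virtual sensor.

```python
# Minimal sketch of my Blender import (bpy), under the assumptions stated above:
# - the exported extrinsics are a row-major 4x4 world matrix
# - fx is the focal length divided by the image width (unverified!)
# - a 36 mm virtual sensor width is used to turn that into a lens value
import bpy
from mathutils import Matrix

def import_camschnappr_camera(world_matrix_rows, fx, name="CamSchnappr_Cam"):
    cam_data = bpy.data.cameras.new(name)
    cam_obj = bpy.data.objects.new(name, cam_data)
    bpy.context.scene.collection.objects.link(cam_obj)

    # Extrinsics: 4x4 world transform exported from TouchDesigner.
    cam_obj.matrix_world = Matrix(world_matrix_rows)

    # Intrinsics: with fx = focal length / image width, the equivalent lens is
    # simply sensor_width * fx (same result as going fx -> FOV -> lens).
    cam_data.sensor_fit = 'HORIZONTAL'
    cam_data.sensor_width = 36.0        # assumed virtual sensor width
    cam_data.lens = cam_data.sensor_width * fx
    return cam_obj

# Example (placeholder matrix, fx from my calibration):
# import_camschnappr_camera(exported_rows, fx=1.37668)
```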

The problem:
Even though the camera’s world position, orientation, and focal length are correctly reconstructed in Blender, the rendered image does not align with what the physical projector (via CamSchnappr) would produce.

I confirmed:
The camera transform in Blender is exactly what CamSchnappr reports (tested by exporting and visual comparison).
The focal length calculated from fx (e.g. fx = 1.37668 → ~40° FOV → lens ≈ 49.45 mm) is correct in theory.
But: The rendered image is visibly shifted and scaled, even though the camera settings match.
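
For completeness, this is the conversion behind those numbers, under my assumption that fx is normalized by the image width and mapped to a 36 mm sensor; it lands at roughly 40° and roughly 49.5 mm (the small difference from the 49.45 mm above is just rounding of the FOV).

```python
# Sanity check of the fx -> FOV -> lens conversion (assumption: fx = f / image_width).
import math

fx = 1.37668
hfov = 2.0 * math.atan(1.0 / (2.0 * fx))       # horizontal FOV, ~39.9 degrees
lens = 36.0 / (2.0 * math.tan(hfov / 2.0))     # ~49.6 mm with a 36 mm sensor
print(math.degrees(hfov), lens)
```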

Here’s the key insight:
I can perfectly match the Blender image to the TouchDesigner render (from the CamSchnappr camera) using only a 2D linear transform (scale + translate).
In my tests, the necessary correction is often a uniform scale (e.g. 0.5) and a simple XY offset — no perspective distortion, no rotation.
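
If it helps to see what I mean, this is how I would fold that manual correction into the Blender camera itself. The mapping is purely my guess: I'm treating the uniform scale as a focal-length factor and the XY offset as Blender's lens shift (which, as far as I understand, is what a principal point offset cx/cy would normally become). The values in the example call are placeholders I measure by hand.

```python
# Hypothetical sketch: fold the measured 2D correction (uniform scale + XY offset)
# into the Blender camera instead of post-processing the render.
import bpy

def apply_manual_correction(cam_obj, scale, offset_x, offset_y):
    cam = cam_obj.data
    # A uniform image scale behaves like a focal-length factor. Whether to
    # multiply or divide depends on which image the correction was measured on.
    cam.lens *= scale
    # An XY image offset behaves like a lens / principal-point shift. I still
    # need to check Blender's shift normalization against however cx/cy are
    # defined in CamSchnappr's projection matrix.
    cam.shift_x = offset_x
    cam.shift_y = offset_y

# Placeholder call -- the 0.5 scale is typical in my tests, the offsets vary:
# apply_manual_correction(bpy.data.objects["CamSchnappr_Cam"], 0.5, measured_dx, measured_dy)
```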

My Question:
What exactly does CamSchnappr’s projection matrix represent?
Is it a normalized (NDC) projection matrix?
Does it assume any virtual sensor size?
Are fx, fy, cx, cy in NDC units or pixel space?
Is the projection matrix symmetric or does it include skew/offset behavior that standard 3D packages (like Blender) cannot easily reproduce?

I’d love to understand how CamSchnappr internally maps camera intrinsics to its rendering pipeline — especially if there’s a documented way to replicate this behavior in external 3D tools.

Ultimately, I’m looking for a clean and accurate way to export a CamSchnappr-calibrated camera to Blender, including the projection behavior, not just the transform.

Thanks in advance for any insight!
I’d be happy to share example matrices, renders, or .toe files if needed.

One remark: exporting a "normal" TouchDesigner camera to Blender works for me. It's only the CamSchnappr camera that has these problems in my environment.