Here is a download link for a .tox file that outputs a perfectly scaled Kinect point cloud. This means you can line it up accurately in 3D modeling space as well.
It is programmed in GLSL and uses some numbers I found on the Microsoft Kinect forum, where an engineer was discussing how they arrive at the real-world skeleton point output (as opposed to the uv/image-space coordinates).
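For anyone curious what that conversion looks like, here is a minimal sketch of the standard pinhole back-projection that this kind of shader performs, written in Python for readability. The function name and the intrinsic values (`FX`, `FY`, `CX`, `CY`) are placeholders for illustration, not the actual numbers from the Microsoft forum post or from this .tox:

```python
# Hypothetical intrinsics (focal lengths and principal point, in pixels).
# These are placeholder values; the real .tox uses numbers from the
# Microsoft Kinect forum that are not reproduced here.
FX, FY = 594.2, 591.0
CX, CY = 339.5, 242.7

def depth_to_world(u, v, depth_m):
    """Back-project a depth pixel (u, v) with depth in metres into
    camera-space XYZ using the standard pinhole camera model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the principal point maps straight down the optical axis.
print(depth_to_world(CX, CY, 1.0))  # (0.0, 0.0, 1.0)
```

In the GLSL version the same arithmetic runs per pixel of the depth texture, with uv coordinates scaled to pixel space first.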
That’s great; I was wondering how to reproduce the depth-with-color sample from the SDK.
Where did you find the values you put in the transform2 to match RGB to depth?
Trial and error, or are those hidden somewhere in the docs?
I am guessing that since it’s a six-year-old post, it won’t work well with current TouchDesigner. Have you tried the Kinect Point Cloud component in the palette? It does a similar thing and has been kept up to date.