Thank you Rob. I currently don’t have a camera in the home office, but I found a calibration file for this camera from Aximmetry. It holds a dict of 8 entries, each with two keys (“Zoom” and “Focus”) for a matching FocalLength. Unfortunately, I don’t understand how to use two dict keys (the FreeD CHOP channels) to do the lookup.

And according to the manual (FreeD section):

Camera Zoom:
magnification of the optical zoom of this unit, between 000555h (WIDE) and 000FFFh (TELE): 1365 → 4095

Camera Focus:
value of the focus position of this unit, between 000555h (NEAR) and 000FFFh (FAR): 1365 → 4095

Spare:
value of the iris position of this unit, between 0555h (CLOSE) and 0FFFh (OPEN): 1365 → 4095
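Since all three fields span the same raw range (0x555 to 0xFFF), a common first step (my assumption, not something the manual prescribes) is to normalize the raw encoder values to 0–1 before doing any lookup:

```python
RAW_MIN, RAW_MAX = 0x555, 0xFFF  # 1365 and 4095, per the FreeD manual

def normalize(raw, lo=RAW_MIN, hi=RAW_MAX):
    """Map a raw FreeD encoder value to the 0-1 range."""
    return (raw - lo) / (hi - lo)

print(normalize(0x555))  # 0.0 (WIDE / NEAR / CLOSE)
print(normalize(0xFFF))  # 1.0 (TELE / FAR / OPEN)
```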

Generally speaking, if you’ve got two keys and your data points are on a regular grid (focus and zoom sampled at regular intervals), then you can use bilinear interpolation (Bilinear interpolation - Wikipedia) to blend between the data points.
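As a sketch of that idea (the corner values below are invented for illustration, not from any real calibration):

```python
def bilinear(x, y, x0, x1, y0, y1, q00, q10, q01, q11):
    """Blend the four corner values q.. of a grid cell at point (x, y)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    bottom = q00 * (1 - tx) + q10 * tx  # interpolate along x at y0
    top = q01 * (1 - tx) + q11 * tx     # interpolate along x at y1
    return bottom * (1 - ty) + top * ty  # then interpolate along y

# Example: focal length sampled at four zoom/focus corners (hypothetical values)
fl = bilinear(0.25, 0.5, 0.0, 1.0, 0.0, 1.0, 24.0, 70.0, 26.0, 74.0)
print(fl)  # 36.75
```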

However, if your data points are more irregular (as I suspect they are), you could try Delaunay triangulation (Delaunay triangulation - Wikipedia) instead which will let you interpolate between a set of random points in any number of dimensions. With 2 keys, this is basically dividing the space into triangles and then for any given focus/zoom combination you can find the 3 closest calibrations and blend between them based on how close they are to your key. Some python code here: scipy.spatial.Delaunay — SciPy v1.10.1 Manual
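For the irregular case, scipy.interpolate.griddata wraps exactly this Delaunay-based blending, so you may not need to build the triangulation by hand. A minimal sketch (sample positions and focal lengths are made-up data):

```python
import numpy as np
from scipy.interpolate import griddata

# (zoom, focus) -> focal length, at irregular sample positions (invented data)
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
focal = np.array([24.0, 70.0, 26.0, 74.0, 45.0])

# 'linear' triangulates the points and blends barycentrically within each triangle
fl = griddata(points, focal, (0.25, 0.25), method='linear')
print(float(fl))
```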

@snaut is actually working on calibration tools that use Delaunay triangulation to interpolate calibration data in 3 dimensions (x, y, z) and might be able to point you to some useful libraries.

I’m not quite sure if this is the correct approach, but it seemed logical, so I just went with SciPy for the moment to build the initial Delaunay cells. The find_simplex() function returns the cell you are currently located in, and its return value can be used to find that cell’s corners. From there you can get the distances and compute a weighted average between them.

For a visualization using a Script SOP and a bunch of random points as input:

from scipy.spatial import Delaunay
import numpy as np

# me - this DAT
# scriptOp - the OP which is cooking

# press 'Setup Parameters' in the OP to call this function to re-create the parameters
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Position')
    p = page.appendXYZ('Pos', label='Camera Position')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    scriptOp.clear()
    inputSop = scriptOp.inputs[0]
    delaunay = []
    # copy the input points and collect their 2D positions for triangulation
    for i in inputSop.points:
        myP = scriptOp.appendPoint()
        myP.P = i.P
        delaunay.append([i.P.x, i.P.y])
    tri = Delaunay(delaunay)
    allTriangles = tri.simplices
    camP = np.array([(scriptOp.par.Posx, scriptOp.par.Posy)])
    inArea = tri.find_simplex(camP)
    locatedIn = allTriangles[inArea]
    # add in the camera position
    camPosP = scriptOp.appendPoint()
    camPosP.P = (scriptOp.par.Posx, scriptOp.par.Posy, scriptOp.par.Posz)
    # draw a line from the camera position to each corner of the containing triangle
    for i in locatedIn[0]:
        connectTo = scriptOp.points[i]
        line = scriptOp.appendPoly(2, closed=False, addPoints=False)
        line[0].point = camPosP
        line[1].point = connectTo
    return
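The weighted-average step can be done with barycentric coordinates, which SciPy exposes through tri.transform. This is a standalone sketch (sample points and values are invented for illustration), not part of the Script SOP above:

```python
import numpy as np
from scipy.spatial import Delaunay

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([24.0, 70.0, 26.0])  # e.g. focal lengths at each sample (made up)

tri = Delaunay(pts)
q = np.array([0.25, 0.25])
s = int(tri.find_simplex(q))              # index of the containing triangle

# barycentric coordinates of q within that triangle
T = tri.transform[s]
b = T[:2].dot(q - T[2])
weights = np.append(b, 1.0 - b.sum())     # one weight per corner, summing to 1

corner_vals = vals[tri.simplices[s]]
print(float(weights.dot(corner_vals)))    # blended value at q
```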

Thank you @snaut and @robmc. I have to admit this is quite a bit over my head. When I purchased a cam with FreeD support I was expecting things would be plug and play.

I appreciate your input and example, but I will need some time to figure out how this applies to the XML.

I wonder if you could calculate the FOV of the lens, since the sensor width is given in the file as well: α = 2 · arctan(l / (2f)), where l is the sensor width and f is the focal length.
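As a quick numeric check of that formula (the sensor width and focal length here are example numbers, not from the calibration file):

```python
import math

def fov_deg(sensor_width_mm, focal_length_mm):
    """FOV = 2 * arctan(l / (2f)), returned in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(fov_deg(36.0, 50.0))  # ~39.6 degrees for a full-frame width at 50 mm
```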

The values for zoom and focus in the file I would interpret as normalized encoder positions…

In general I’m not sure this is a 2D lookup. Theoretically, with a lens focused at infinity, you should get a focal length equivalent to the focal length specified on the lens. The focus is then a slight adjustment to that focal length following the thin-lens formula 1/s1 + 1/s2 = 1/f, where s1 is the distance to the object and s2 is the distance to the rear nodal point of the lens, which I would interpret as the actual focal length, following the article on Wikipedia.
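As a numeric illustration of that formula (example numbers, not calibration data): for a 50 mm lens focused on an object 2 m away, solving 1/s1 + 1/s2 = 1/f for s2 gives the image distance, which is only slightly longer than f and approaches f as s1 goes to infinity:

```python
f = 50.0     # focal length in mm (example value)
s1 = 2000.0  # object distance in mm (example value)

# 1/s1 + 1/s2 = 1/f  ->  s2 = 1 / (1/f - 1/s1)
s2 = 1.0 / (1.0 / f - 1.0 / s1)
print(s2)  # ~51.28 mm, a small adjustment relative to f
```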

So my assumption would be that for each zoom level you should, at a minimum, have a focal length measurement for the closest and furthest focus settings. With that it’s a 2-stage lookup: first on the zoom level, which gives you a focus curve, and then a lookup on that curve (with only 2 measurements it would be linear).
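A sketch of that 2-stage lookup, assuming the calibration holds a near/far focal length pair per normalized zoom level (the table below is invented for illustration, not the Aximmetry file format):

```python
# zoom -> (focal length at NEAR focus, focal length at FAR focus); made-up data
calib = {
    0.0: (24.0, 24.5),
    0.5: (45.0, 46.0),
    1.0: (70.0, 72.0),
}

def lookup(zoom, focus):
    """Stage 1: linear interp between zoom levels. Stage 2: linear interp near->far."""
    zooms = sorted(calib)
    # clamp, then find the bracketing zoom samples
    zoom = min(max(zoom, zooms[0]), zooms[-1])
    for z0, z1 in zip(zooms, zooms[1:]):
        if z0 <= zoom <= z1:
            break
    t = 0.0 if z1 == z0 else (zoom - z0) / (z1 - z0)
    near = calib[z0][0] * (1 - t) + calib[z1][0] * t
    far = calib[z0][1] * (1 - t) + calib[z1][1] * t
    return near * (1 - focus) + far * focus

print(lookup(0.25, 0.0))  # halfway between the first two NEAR values
```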

Thanks for your second file, it helped me a lot with understanding the protocol after learning a bit more about Python! All this is rather new to me. My plan was to send out FreeD data just to test some Unreal projects without actually having a tracking system in my office. For now, though, I’ve decided against using TouchDesigner for this and will use Widget Designer with an extension instead.

The FreeD Out CHOP will be part of our next experimental series, but we don’t have a release date for that yet. In the meantime, we do have a custom build based on an earlier 2021 release that you’re welcome to try here: Dropbox - File Deleted - Simplify your life

Sorry, I’m an end user, not a programmer, but I found this tool (“github DOT com/max-verem/VRPN-FreeD-OpenVR”) that might help you figure it out. It transforms SteamVR data to FreeD and transmits the data via UDP.

The FreeD CHOP is available in the current 2021 release and it will receive FreeD UDP data and make it available as chop channels. There’s more information on our wiki here: https://docs.derivative.ca/FreeD_CHOP

There is also a FreeD Out CHOP coming in our next release that will send data out from TouchDesigner to other applications that recognize the FreeD format.

Let me know if there’s something else you’re looking for.