iPad Multitouch Input

Hi,

I am trying to use my iPad as a multitouch input surface for a TouchDesigner application. This is for testing only. I am working on a project that will use a large-format multitouch surface, but I need to use the iPad for development purposes.

I have created a custom iPad app that outputs OSC, and I have the data coming in: the touches show up in the scene. However, when I have more than one touch, they are not registered simultaneously; instead, the input switches between touches as the values are updated.

Is there a way I can mimic normal multitouch input, where each touch persists until a touch_up message arrives?

I am running TouchDesigner 088 64-bit, build 25880, on Windows 7.

Thanks!

What are you using to catch and parse the data in TouchDesigner? Do you want to upload your patch?

Also, do you know which large-format multitouch surface you're going to be using?

Unfortunately I cannot post the patch.

I have an OSC In DAT that is receiving OSC from the iPad. I then parse out the x,y coordinates with a Select DAT.

I guess what I want to do is parse the incoming OSC and format it like the Multi Touch In DAT. Not sure if that is possible.

The patch is already set up to work with the large-format touch display; I believe it is called AirScan. That isn't really the problem. The problem is that while developing my side of things I don't have access to that system and need some kind of multitouch input in the meantime. I have an iPad and the skill set to create my own apps with OSC/TUIO, etc.

Any recommendations welcome.

You could make something like that, though I'm inclined to think you're almost better off just getting a cheap touchscreen monitor and using it as your testing unit. Otherwise you can use Python to parse it all out. How are your Python chops?

We did a similar project using CCV on a touch table. We parsed the incoming message into individual points, kept a table that tracked how many points there were, and replicated a small chain of operators and scripts that took the CCV data (which was basically just id, x, y) and parsed it into rows of a table. With CCV you just have to keep track of the ids over time: when an id disappears from your incoming stream you know that touch event is over, at which point you clear the replicated component you made to parse that touch point out. Most of the Python script was in a UDP In DAT that was executed every frame, because CCV was sending data even when there were no touch points; I'm not sure how your app behaves. If your app is similar you can add your script to the OSC In DAT; if not, it might be useful to have an Execute DAT running every frame in parallel with the OSC In DAT. Something like the sketch below.
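To illustrate the id-tracking idea, here is a minimal sketch (not the actual script from our project, and the function and handler names are made up for illustration). It assumes each incoming message has already been parsed into a list of (id, x, y) tuples; it compares the current ids against the previous frame's to classify each point as a new touch, a move, or a touch_up:

[code]
# Minimal sketch of CCV-style id tracking, independent of any
# particular OSC parser. 'active' holds the touches that were
# present in the previous message/frame.
active = {}  # id -> (x, y)

# Placeholder handlers; in TouchDesigner these would append to,
# update, and delete rows of a Table DAT instead of printing.
def on_touch_down(pid, x, y): print('down', pid, x, y)
def on_touch_move(pid, x, y): print('move', pid, x, y)
def on_touch_up(pid):         print('up', pid)

def update_touches(points):
    """points: list of (id, x, y) tuples parsed from this message."""
    global active
    current = {pid: (x, y) for pid, x, y in points}

    for pid, (x, y) in current.items():
        if pid in active:
            on_touch_move(pid, x, y)   # known id: touch moved or is held
        else:
            on_touch_down(pid, x, y)   # new id: touch started

    for pid in set(active) - set(current):
        on_touch_up(pid)               # id gone from the stream: touch ended

    active = current
[/code]

Since CCV sends messages even when there are no touch points, calling update_touches([]) on those frames is exactly what generates the touch_up events; the handlers can then maintain a Table DAT (e.g. via appendRow()/deleteRow()) shaped like the Multi Touch In DAT's output.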

But yeah, if you have solid Python chops, take the second route and save some money; otherwise save your time and buy a cheap touchscreen you can use for development with the Multi Touch In DAT, if that's your end goal.

This is a bit old school, using tscript, but check out the vclick command:

[url]http://www.derivative.ca/wiki088/index.php?title=Vclick_Command[/url]

It'll let you do a virtual click with identifiers, so multiple simultaneous down-drag-up sequences are possible. You can format the incoming data as vclick commands and execute them as they arrive. Hope this helps.

The Python version of vclick is the .interact() method on Panel COMPs.
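As a very rough, hypothetical sketch of wiring parsed touches into that (the keyword arguments here, and how a touch id would map onto them, are assumptions on my part; check the Panel COMP class page in the wiki for .interact()'s actual signature):

[code]
# Hypothetical sketch only: the u/v/left keywords and the id handling
# are assumed for illustration. Consult the Panel COMP class docs for
# the real .interact() parameters before using this.
panel = op('/project1/container1')  # the panel component you want to drive

def on_touch_move(pid, x, y):
    # assumes x, y are already normalized 0-1 panel coordinates
    panel.interact(u=x, v=y, left=True)   # virtual press/drag

def on_touch_up(pid):
    panel.interact(left=False)            # virtual release
[/code]

Plugged into the handlers from the id-tracking sketch earlier in the thread, each touch_up would end its own down-drag-up sequence, which is the behaviour that was missing.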