TD doesn’t provide a video / canvas element; it pushes one or more tracks over an RTCPeerConnection.
An RTCPeerConnection reaches the connected state once the signaling process completes, i.e. after the exchange of offers, answers, and ICE candidates.
In JS, you would first set up your own signaling client to exchange signaling messages with TouchDesigner; if you stick to TD’s own signaling server, that means using WebSocket.
All the messages are documented in the Wiki, and a JSON Schema is available for validating your messages.
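As a rough sketch of that signaling client, here is what the browser side could look like. The URL and the message envelope below are illustrative assumptions, not TD’s documented schema — check the Wiki and the JSON Schema for the exact field names the signaling API expects.

```javascript
// Minimal signaling client sketch. The message shape is illustrative;
// consult TD's signaling API docs / JSON Schema for the real fields.
function makeSignalingMessage(signalingType, content) {
  // Hypothetical envelope: a type tag plus a content payload.
  return JSON.stringify({ signalingType, content });
}

function connectSignaling(url, onMessage) {
  // Browser WebSocket; the host/port is whatever the TD
  // signalingServer is configured to listen on.
  const ws = new WebSocket(url);
  ws.addEventListener('open', () => console.log('signaling connected'));
  ws.addEventListener('message', (e) => onMessage(JSON.parse(e.data)));
  return ws;
}
```

From there, every offer, answer, and candidate travels as one of these JSON messages over the socket.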
Once your signaling client is running, you can establish an RTCPeerConnection.
Once your RTCPeerConnection reaches the “connected” state, a track event should fire. From that event you can get the WebRTC track, add it to a MediaStream, and assign that MediaStream to the srcObject of a video element.
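That last step can be sketched as follows. This is browser-only code and assumes your page already contains a video element; the guard keeps it inert elsewhere, and the offer/answer exchange is left to your signaling client.

```javascript
// Attach a received track to a <video> element. Prefer the stream the
// sender associated with the track; otherwise wrap the track ourselves.
function attachTrackToVideo(event, video) {
  video.srcObject =
    (event.streams && event.streams[0]) || new MediaStream([event.track]);
}

if (typeof RTCPeerConnection !== 'undefined') {
  const pc = new RTCPeerConnection();
  const video = document.querySelector('video'); // assumes a <video> in the page
  pc.addEventListener('connectionstatechange', () => {
    console.log('connection state:', pc.connectionState);
  });
  pc.addEventListener('track', (event) => attachTrackToVideo(event, video));
  // ...offer / answer / candidate exchange via your signaling client goes here...
}
```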
Having the palette tools running in TD is fairly easy.
You don’t need multiple computers or even multiple instances running.
Drag n drop the signalingServer from the palette → Change its Active par to ON.
Drag n drop 2 signalingClients from the palette → Change their Active par to ON and their Forward to Subscribers par to ON.
Drag n drop 2 WebRTC COMPs from the palette → Assign the first signalingClient to the first WebRTC COMP, then repeat with the second.
Now if you look at the signalingServer, you’ll see 2 signalingClients showing up.
On each signalingClient, you will see the other signalingClient.
On each WebRTC COMP, you will see the other client, i.e. the client representing the signalingClient that is not assigned to the WebRTC COMP you are looking at.
You just have to go into active mode to start a call.
With all that in mind, what you have to achieve in your web page / project is for your client to show up in the signalingServer so that the WebRTC COMP can call it (start an RTCPeerConnection and make an offer).
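When the WebRTC COMP calls your page, your client receives the offer through signaling, sets it as the remote description, and replies with an answer. A hedged sketch of that handler, where `sendSignal` is whatever function writes a message onto your signaling WebSocket, and the outgoing envelope is illustrative rather than TD’s exact documented schema:

```javascript
// Answer an incoming call from the WebRTC COMP.
async function answerOffer(pc, offerDescription, sendSignal) {
  await pc.setRemoteDescription(offerDescription); // the COMP's offer
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  sendSignal({
    signalingType: 'Answer', // illustrative type tag
    content: { sessionDescription: answer },
  });
}
```

ICE candidates need the same treatment in both directions (pc.addIceCandidate for incoming ones, a signaling message for each local icecandidate event).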
I hope this helps and gets you going,