It came up during last week’s live TouchDesigner stream that the engine COMP was initially conceived as a solution to having to run audio processes on separate instances of Touch. I was a bit surprised, given that it’s one of the main things that doesn’t seem to work at all for the moment. I had assumed it was simply low-priority or not envisioned for this purpose.
I would be absolutely overjoyed if we can figure out what the issues are and make the engine COMP an awesome workflow component.
There are a few issues I’ve encountered, the primary one being that while the engine COMP will output time-sliced channels at the correct rate and length, all the samples in a given channel will be identical.
Next up, when using a more complex audio network which relies on external MIDI input (which might be relegated to the .tox loaded inside the engine COMP in the future), I get the error “insufficient samples were provided for rendering”, along with the same bug as above.
Notice also that the outputs have changed order. This will surely get annoying when switching from the main COMP to the engine COMP, because it requires you to remember two layouts, one of which is totally arbitrary.
Next up are more aesthetic complaints, I suppose, but ones which I feel greatly impact ease of use. The “menu” parameter types do not carry over to the engine COMP, nor do sections, nor do connector colours.
So far these are all the issues that struck me upon first use of the engine COMP, and they have stopped me from considering it at all for the moment. Let me know if there’s anything else I could check or change, or maybe some documentation I haven’t seen for these sorts of issues.
Let me know what you think!
Thanks for your time,
Owen
Ah, when I was talking about it being used for audio (although we haven’t spent much time on it yet), the thought was that all of the audio work would be done within the Engine COMP. If you are piping audio into the Engine COMP, although it should work eventually when it’s fleshed out more, it really isn’t giving you much benefit. Since your master file can still drop frames and thus drop audio being fed into the Engine COMP, you don’t gain the benefit of having things in another process.
The main benefit occurs when all of your audio work is done within the Engine, with just triggers/adjustments being sent in, not audio. That way if your master file which is doing graphics drops a bunch of frames, the audio doesn’t get affected.
Hi Malcolm, I caught that part of the conversation but didn’t want to derail the stream more. All the audio IS being done in the engine COMP. There is nothing being piped in or out except for that MIDI CHOP in the second example.
I can send you some files if you want to have a look. Can’t post them online yet as I am selling them in my component store.
I’ll have to PM you the more elaborate example where the output order changes. I haven’t been able to reproduce the bug from scratch just now, but it’s the case with all my more complex audio comps.
Thanks for the samples - it’s really helpful to have some real-world usage to unearth problems. I’m working through them now - a couple of initial notes:
No parameter menus - this is fairly high up our list of improvements
Re-ordered outputs - this will be fixed in the next release
I’ll come back to you once I’ve got useful progress to report on the other issues - thanks for the feedback!
Some more changes in the next release that will affect your examples:
output sample alignment was off sometimes
“Output Buffer Auto” parameter on the Tune page didn’t behave as expected
parameter menus are now implemented
As Malcolm says, if you can do your audio output inside the Engine .tox rather than sending it back out of the Engine COMP you will benefit from isolation from the host process - though obviously we do intend it to work properly in situations where you do need to bring it back into the host network. If you are using audio through CHOP Ins and Outs then you may want to experiment with the parameters on the Tune page - the Info CHOP has details which may help, and the next release will add even more stats.
the issue I’m encountering is that I have a WebRTC engine comp, which handles audio and video i/o and needs to feed an audio stream to the audio engine comp for separate processing.
So it’s hard to just keep the audio inside the audio engine comp in that case.
I keep seeing the <!> warning yellow triangle popping up, and if I have anything a little complex in the audio engine comp, it goes static yellow (loading…). Are there settings I’m missing to make this work?
thanks!
dani
this is what I mean btw by a “little complex”: I’m actually bringing in the audio at 8K, and I’ve added an OP Viewer. You can see the network in the source component in the lower left, while the two parameter panes show the settings of the SpeechModel (audio) engine comp, which I’ve tried to tweak to no avail: it always says “Loading…”. If I delete the OP Viewer and attached operators, and basically only leave the audio in and out at 8K (the hog is set to 0), the warning sign starts flashing
here’s a toe file which only contains an audio input connected to both the engine comp and the component it’s derived from (which you should save as a tox and use in the engine comp)
The regular component, “HogSpeech”, works as it should. The SpeechModel engine component doesn’t. It sometimes even errors out, but I don’t know if there’s a way to figure out what error it encounters; often it just shows the <!> yellow triangle…
Thanks for the example. I’ve fixed two issues which you were running into - the OP Viewer TOP was the root of the problem, so if you can remove that you should be able to work until the next release.
You can use an Error DAT to catch errors - useful if they only occur for a frame, as can happen with time-sliced CHOPs such as audio.
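A minimal sketch of that setup (TouchDesigner Python; the Error DAT name ‘errors1’ and the DAT Execute wiring are assumptions on my part, so adjust for your network):

```python
# DAT Execute DAT callback watching an Error DAT (assumed to be named
# 'errors1'); prints each error row so one-frame errors aren't lost.
def onTableChange(dat):
    for row in dat.rows()[1:]:   # skip the header row
        print('Engine error:', ' | '.join(c.val for c in row))
```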
Working with audio, you may want to use the Tune page of the Engine COMP parameters to turn In Buffer Auto and Out Buffer Auto to Off, then set a fixed number of frames to buffer - a higher number will reduce the risk of the Engine instance running out of CHOP data, at the cost of latency. Alternatively you could use the Shared Mem CHOPs to pass the audio directly between the TouchEngine instances without it having to be routed through TouchDesigner at all.
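To put rough numbers on the latency side of that trade-off - a sketch, assuming the buffer is counted in timeline frames (as the Tune page parameters suggest) and a 60 fps host timeline:

```python
# Rough math for the buffering vs. latency trade-off.
# Assumptions: buffer is measured in timeline frames, host runs at 60 fps.
TIMELINE_FPS = 60

def buffer_latency_ms(frames_buffered):
    """Extra latency, in milliseconds, added by buffering N frames of audio."""
    return frames_buffered * 1000.0 / TIMELINE_FPS

# Each buffered frame adds ~16.7 ms but gives the Engine instance one more
# frame of slack before it runs out of CHOP data.
print(buffer_latency_ms(3))  # -> 50.0
```

So a handful of frames of headroom is usually a tolerable cost for audio that doesn’t glitch.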
thank you very much. I didn’t think about the Error DAT; that makes me feel a lot better about Engine comps erroring out mysteriously
I don’t really need the OP Viewer - it was there just for a monitoring comp, so I’ll remove it and tune the i/o buffers.
thanks again!
dani
the other problem I’ve had with this, and I may as well bring it up here since you have the toe downloaded, is that unless I force cooking of the trail operators during the “while On” (in the on_change op) with
op('trail'+tnum).cook(force=True)
the trail CHOPs only store the first (off to on) and last (on to off) samples of audio.
if, on the other hand, I force cooking, then I see the whole sound wave. Maybe it has to do with activating/deactivating them, I’m not sure.
you can see this behavior if you comment out the above op('trail'+tnum).cook(force=True) in the on_change op -> while on and attach a null CHOP to the output of the HogSpeech component, as opposed to uncommenting it. The behavior is the same in the engine comp.
I really don’t like force cooking; in this case it seems to point to either an issue or something I’m not doing right. Any ideas?
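for reference, the workaround spelled out as the full While On callback (TouchDesigner Python; the whileOn signature is TouchDesigner’s CHOP Execute one, while the ‘trail’ naming and tnum come from my network, so treat the index lookup as an assumption):

```python
# CHOP Execute DAT callback: force the gated Trail CHOP to cook while the
# gate channel is on, since nothing downstream is pulling on it yet.
def whileOn(channel, sampleIndex, val, prev):
    tnum = channel.name[-1]                   # assumed: gate channel name ends with the trail index
    op('trail' + str(tnum)).cook(force=True)  # record every frame, not just the on/off edges
```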
Generally an OP will only cook if its output is being used, and as you only switch to a Trail OP when you set Active to off, it doesn’t get cooked for the on period.