Hey forum! I’d like to get speech recognition working with touch. I followed this tutorial to create a speech-to-text Python environment.
I’m wondering if
A) this python program can be set up and run inside of touch, or if it needs to be run separately,
and B) how to parse the printed lines into either a text TOP or to a CHOP channel.
I’m pretty new to python, and any insights will be helpful. Cheers!
I’m not super familiar with this speech-to-text method, but I don’t think TD would support this kind of work directly (that is, internally). It may, however, let you stream data from outside into TD. I’m not super confident with DATs, which is where most of this would presumably take place. I’d suggest looking into some DAT tutorials and resources from Derivative to see what people have already done. Another great source is alltd.org. They do a ton of cool stuff over there, and who knows, someone may have already made a component that does this!
Hope this was at least somewhat helpful!
Well, you could run this program using subprocess.Popen(); that way you can start and close the script from touch.
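A minimal sketch of that idea, assuming your recognizer script is a standalone Python file (the path here is hypothetical); in touch this would live in a Text DAT or an Execute DAT callback:

```python
import subprocess
import sys

def launch_recognizer(script_path):
    """Start the external speech-to-text script as a child process.

    Returns the Popen handle so the process can be stopped later,
    e.g. from a TouchDesigner exit callback.
    """
    return subprocess.Popen(
        [sys.executable, "-u", script_path],  # -u: unbuffered stdout
        stdout=subprocess.PIPE,
        text=True,
    )

def stop_recognizer(proc):
    """Terminate the child process and wait for it to exit."""
    proc.terminate()
    proc.wait(timeout=5)
```

Note that the external script may need a different interpreter than the one embedded in TD (if it depends on packages TD’s Python doesn’t have), in which case you would replace `sys.executable` with the full path to that interpreter.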
For sending the message: line 195 is where the printout happens. I would simply use the Web Server DAT and send the recognized text as a request to my machine. This would even make it more usable, since you could have one dedicated computer running the recognizer and send the found words over the network.
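On the recognizer side, that could look something like this. This is a sketch, not the tutorial’s code: the URL and port are assumptions, and the Web Server DAT on the touch machine would need a callback that reads the request body.

```python
import urllib.request

# Hypothetical address of the machine running TouchDesigner;
# the Web Server DAT would be listening on this port (assumption).
TD_URL = "http://127.0.0.1:9980/speech"

def send_text(text, url=TD_URL):
    """POST one recognized phrase to the Web Server DAT.

    Returns the HTTP status code so the caller can spot failures.
    """
    req = urllib.request.Request(
        url,
        data=text.encode("utf-8"),
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status
```

So instead of the print on line 195, you would call `send_text(...)` with the recognized string. Using HTTP also means the recognizer doesn’t care whether touch runs on the same machine or another one on the network.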
Thanks for the resource, @luxnaut! Looks like I need to do a deep dive into the use of DATs. I found this thread about Poetry, which opens up the possibility of running a virtual environment within touch. Will look into this further. RFE: integrate Poetry package manager with TD
@alphamoonbase I like the idea of sending the printout over the network.
Would I run this in a text DAT and reference my python project?