Streaming large CSVs into TouchDesigner

Hi, I would like to stream large CSV/JSON formatted data into TouchDesigner. For small CSV files, I can open them as a table, transpose, and send the rows one by one the same way Matthew Ragan suggests in one of his tutorials. But for large data I would like to have it streamed instead of having TD read all the data at once.

Time for some Python, I would say! Opening a file is relatively easy. You can then use the readline method to read a specific line and use the split() method to get a list of entries.
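A minimal sketch of that approach, reading one row at a time so the whole file never has to sit in memory (the filename is a placeholder; in TD you would typically pull one row per frame or timer callback):

```python
def stream_rows(path):
    """Yield one CSV row at a time as a list of strings,
    without loading the whole file into memory."""
    with open(path, "r") as f:
        for line in f:
            yield line.rstrip("\n").split(",")

# usage (hypothetical file):
# rows = stream_rows("eeg_data.csv")
# header = next(rows)          # first row is the column names
# for row in rows:
#     ...  # e.g. write the row into a Table DAT
```

Note this naive split(",") breaks on quoted fields containing commas; Python's built-in csv module handles those cases if your data needs it.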


Yeah, looks like there is no escape from learning Python… so with this method, I don't have to send via TCP or UDP, right?

I don’t really know what you mean by that. If you want to send the data from one computer to another, sure, you have to use TCP/IP or UDP. But to really give you a better answer, I don’t have enough information about what files you want to ‘stream’, from where, in what format, etc.

Basically, I would like to have TD read large CSV files containing data such as EEG recordings, security log frequency data, etc. Those files don’t need to be sent via UDP or TCP; if TD can read them line by line on the same computer, that solves my problem :slight_smile: For some reason I always thought a TCP or UDP stream would be the only way to achieve this, but I was wrong… it's time to learn some Python :slight_smile: Thanks!

TCP/UDP are network protocols :slight_smile:
Would you mind sharing one of the CSV files? Sounds like an interesting idea to try myself.

Of course. I am sharing my EEG data from a 13-minute Teams meeting discussing a new customer. Let me know what you have done so far :slight_smile:

You can download the compressed CSV file here.

Well, I know TCP and UDP :slight_smile: I had been thinking of using the Apache NiFi ETL tool with two processors, readCSV and putTCP (or putUDP), but I wasn't able to configure NiFi. Elastic “Beats” or “Logstash” could be used as well. You can still use them on the same computer or over the network; it doesn't matter.

The pandas library for Python can read very large datasets rather quickly, with minimal lines of code:
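A sketch of what that could look like, using pandas' `chunksize` parameter to `read_csv` so even a multi-gigabyte file is processed in manageable pieces (the filename and chunk size here are placeholders, not from the thread):

```python
import pandas as pd

def iter_chunks(path, chunk_size=10_000):
    """Yield DataFrames of at most chunk_size rows each, so the
    whole CSV never has to fit in memory at once."""
    for chunk in pd.read_csv(path, chunksize=chunk_size):
        yield chunk

# usage (hypothetical file):
# for chunk in iter_chunks("eeg_data.csv"):
#     process(chunk)  # e.g. downsample and push into TD
```

For a one-shot load of a file that does fit in memory, a plain `pd.read_csv(path)` is all you need; the chunked form only matters when streaming is the point.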