Import image data from an external Python process

Hi,
I’m experimenting with stylegan2-ada-pytorch to generate live visuals from custom trained models.
So far I’ve run the process in an external Anaconda environment, converted the resulting tensor/NumPy arrays to image files, and then retrieved them from disk with a Movie File In TOP, which is obviously slow and inefficient.
Can someone give me some clues about the least expensive way to import external process data into Touch?
Or am I supposed to run the script from within Touch?
I’m a bit scared of getting caught in dependency issues…
Any advice would be really appreciated!

I’m running Python 3.7.10
These are the libraries I need:
pytorch 1.8.1
CUDA 11.1
ninja
opensimplex
os
subprocess
re
click
dnnlib
numpy
PIL

GPU: RTX 2080

Hi

Search the TD community site for Vasily; you will find posts and tutorials about external Python machine learning, thanks to a Spout version he ported.
That works fine. I use it for a personally trained CycleGAN and don’t run into crashes. The advantage of an external process is that it runs on CPU/GPU resources separate from TD.

The fastest way is probably to write your NumPy arrays directly to a TOP/CHOP using the Script TOP/Script CHOP’s copyNumpyArray methods.

See examples for both in the OP Snippets.
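To illustrate the conversion step, here is a minimal sketch assuming the Script TOP accepts an 8-bit RGBA array (check the OP Snippets example for the exact layout your build expects); `to_rgba_uint8` is a hypothetical helper name:

```python
import numpy as np

def to_rgba_uint8(frame):
    """Convert an H x W x 3 float frame in [0, 1] (e.g. a detached
    StyleGAN output tensor) into an H x W x 4 uint8 RGBA array,
    the kind of layout a Script TOP's copyNumpyArray() can accept."""
    rgb = (np.clip(frame, 0.0, 1.0) * 255).astype(np.uint8)
    # Add a fully opaque alpha channel.
    alpha = np.full(rgb.shape[:2] + (1,), 255, dtype=np.uint8)
    return np.concatenate([rgb, alpha], axis=2)

# Inside TouchDesigner this would live in the Script TOP's onCook callback:
# def onCook(scriptOp):
#     scriptOp.copyNumpyArray(to_rgba_uint8(latest_frame))
```

The TouchDesigner-specific call is left commented out because it only runs inside TD; the array preparation itself is plain NumPy.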

Yes, you can redirect TD to your Anaconda libraries and use a Script TOP, but machine learning needs a lot of resources, so you should use a second TD instance in this case.
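That redirection is usually just a `sys.path` tweak at startup. A minimal sketch, assuming a Windows-style Anaconda path (both the path and the helper name are hypothetical; use your own environment's site-packages, and make sure the Python minor versions match, or compiled modules like PyTorch will fail to import):

```python
import sys

# Hypothetical path to your Anaconda environment's site-packages;
# find yours by running, inside that environment:
#   python -c "import site; print(site.getsitepackages())"
CONDA_SITE_PACKAGES = r"C:\Users\me\anaconda3\envs\stylegan\Lib\site-packages"

def add_conda_packages(path):
    """Prepend an external site-packages directory to sys.path so
    TouchDesigner's Python can import libraries installed there."""
    if path not in sys.path:
        sys.path.insert(0, path)

add_conda_packages(CONDA_SITE_PACKAGES)
```

The idempotency check avoids stacking duplicate entries every time the script re-cooks.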

Or try to get PyTorch running in a C++ TOP. I gave some more tips here: Error building Pytorch TOP with CMake · Issue #16 · DBraun/PyTorchTOP · GitHub

Which is why I’m pushing for an upgrade to Python 3.9, so we can start to use the multiprocessing.shared_memory feature between Python processes. There is a super sweet example of reading a NumPy array from two different Python processes on that last page.
This means in a few months you won’t have to run your Anaconda process in TD, but can access the NumPy results directly from TD!
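That shared-memory pattern looks roughly like this on Python 3.8+ (a sketch; the segment name `"td_frame"` and the frame shape are arbitrary assumptions, and both sides must agree on them out of band):

```python
import numpy as np
from multiprocessing import shared_memory

# Producer side (e.g. the Anaconda/StyleGAN process):
frame = np.random.rand(256, 256, 4).astype(np.float32)
shm = shared_memory.SharedMemory(create=True, size=frame.nbytes,
                                 name="td_frame")
producer_view = np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)
producer_view[:] = frame  # copy pixels into shared memory

# Consumer side (e.g. TouchDesigner, once it ships Python >= 3.8),
# which only needs the segment name, shape, and dtype:
shm2 = shared_memory.SharedMemory(name="td_frame")
consumer_view = np.ndarray((256, 256, 4), dtype=np.float32, buffer=shm2.buf)
# consumer_view now aliases the producer's pixels: no disk, no sockets.
frames_match = np.array_equal(consumer_view, frame)

# Clean up (views must not be touched after close/unlink).
shm2.close()
shm.close()
shm.unlink()
```

In real use the producer and consumer are separate processes, so you would also need some signalling (e.g. a flag or a small socket message) to tell TD when a fresh frame is ready.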

Thanks, guys, for your many suggestions.
For sure I will start by implementing Spout, which seems to me the most out-of-the-box solution for quick prototyping, but I find what @nettoyeur suggests regarding Python 3.9 very promising for my needs.
As for @shieman’s and @DavidBraun’s suggestions, I have a few things to process; I hope to come back soon with successful updates! :crossed_fingers: :crossed_fingers:

One last question, just to explore all the possibilities for future/final developments: if I planned to run the GAN script on an online virtual machine (e.g. Paperspace) and send the output to my local machine to process it with Touch, would the scenario be radically different?

All the best!

Also: thanks @nettoyeur, I had a look at the OP Snippets and it seems exactly what I was looking for. Just to clarify: to achieve this, do I have to run the process directly from Touch, as explained in TD Summit 2019 – External Python Libraries – Matthew Ragan?
At least until TD updates to Python 3.9…

Here’s Vasily’s profile; I think his ML_Lego tutorial/asset/video package might be a good resource for you.


I think the links posted by @ben are a good solution. @DavidBraun’s way is good too, but with some ML models TorchScript or ONNX export doesn’t work; in my case, CycleGAN can’t be exported to TorchScript or ONNX yet.
So I chose Spout and moved the Python work into separate processes.
Vasily’s solution works fine, and there is a StyleGAN tutorial, if I remember correctly.
