TD-ML - Machine Learning Toolkit for TouchDesigner
Hello,
Bravo for the work. I am trying to replace Wekinator with it; I will ask some questions on Discord.
Thank you,
Jacques
Hi Jacques, thank you - happy to hear!
Check, catch u there.
Cheers,
Joel
This looks incredible. Would it be a good candidate for training hand pose detection like "thumbs up" from MediaPipe or Leap Motion data? I see you map it to visual parameters, but I'm not sure what the training workflow for that would look like?
Hello,
I just built a tool with it using the NVIDIA face tracker to recognise simple emotions, and it works well. And there is an example with a "rock, scissors…" Leap Motion device.
Hi @cem_futuretense,
yes, it surely would - as @jacqueshoepffner mentioned, there are a few examples for the MLP Classifier in the "TD-ML.toe" file, one of which illustrates such a workflow with a Leap Motion (classifying the full Leap Motion CHOP stream of the left hand into "Rock", "Paper" or "Scissors").
As this works the same for any CHOP, it should be directly applicable to e.g. MediaPipe tracking data, or anything else that is represented in CHOPs. I will try to record a video tutorial on (MLP) classification this weekend, but for now I hope the TD-ML.toe file helps. (Sorry for the delay there, I'm not experienced with doing tutorials.)
About the workflow: note that in the "Datasetter.tox" you have the par pages "Input X" (input data), "Input Y" (output data) and "Input W" (weight data, optional), which refer to the COMP's inputs 1-3. When training a classifier, your "Input Y" needs to be text/DAT, since you want labels as output. For that, the mentioned par pages provide a menu per input where you can change its type (CHOP or DAT). Setting it to "dat" for "Input Y" changes the COMP's second input connector to DAT type and lets you pair your CHOP / tracking data (Input X) with text / labels.
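Outside TouchDesigner, the same pairing of CHOP samples (Input X) with text labels (Input Y) boils down to a standard scikit-learn fit. A minimal sketch, with made-up channel values and labels (not the actual Datasetter internals):

```python
# Sketch: pairing numeric samples (Input X) with text labels (Input Y)
# for a classifier, as in the Rock/Paper/Scissors example. Values are
# illustrative placeholders, not real Leap Motion channels.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row is one flattened CHOP sample (e.g. hand-joint channels).
X = np.array([
    [0.1, 0.9, 0.2],   # a "Rock" pose sample
    [0.8, 0.1, 0.7],   # a "Paper" pose sample
    [0.4, 0.5, 0.9],   # a "Scissors" pose sample
])
# Matching text labels from the DAT input (Input Y).
y = ["Rock", "Paper", "Scissors"]

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
clf.fit(X, y)
pred = clf.predict([[0.1, 0.9, 0.2]])
print(pred)
```

scikit-learn accepts string labels directly, which is why the Y input can stay plain text in the DAT.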
About Value Ranges of the Input CHOPs:
All of the current neural networks have scikit-learn's "StandardScaler" in their internal pipeline for training and inference, which means you never have to worry about normalizing or re-transforming your data - the "StandardScaler" does that for you automatically. Just feed in your values as they are.
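In plain scikit-learn terms, the setup described above looks roughly like this (a sketch; the toolkit's actual pipeline may contain more steps):

```python
# Sketch of a StandardScaler + MLP pipeline: the scaler learns mean/std
# at fit time and re-applies them at predict time, so raw, unnormalized
# values can be fed in on both ends.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Two channels with wildly different ranges - no manual normalization.
X = np.array([[0.0, 100.0], [1.0, 200.0], [0.0, 110.0], [1.0, 190.0]])
y = ["low", "high", "low", "high"]

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=3000, random_state=0),
)
model.fit(X, y)
pred = model.predict([[0.0, 105.0]])
print(pred)
```

Because the scaler is part of the pipeline, the same transform is applied consistently at inference, which is exactly why re-normalizing by hand is unnecessary.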
But of course, if your CHOP value ranges have changed since training / dataset creation, the network won't perform well (or at all), as it was only trained on the old value ranges.
On Neural-Network Sizes:
The size of the neural network is set by the "hidden_layer_size" par on the "Training" page. Putting "50, 50" there (without quotation marks, as a comma-separated string) creates a network with 2 hidden layers of size 50; "50" creates one hidden layer of size 50; "30,30,30" creates three hidden layers of size 30, and so on. As for the best size for your use case: I can't offer a general rule of thumb at the moment and would encourage your own experimentation and online research.
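Parsing such a comma-separated size string into the tuple scikit-learn's `hidden_layer_sizes` expects is a one-liner. A hypothetical sketch (the function name is illustrative, not the toolkit's actual code):

```python
# Hypothetical helper: turn a par value like "50, 50" into the tuple
# that MLPClassifier's hidden_layer_sizes parameter expects.
def parse_hidden_layers(par_value: str) -> tuple:
    """'50, 50' -> (50, 50); '30,30,30' -> (30, 30, 30)."""
    # int() tolerates surrounding whitespace, so "50, 50" parses cleanly.
    return tuple(int(part) for part in par_value.split(",") if part.strip())

print(parse_hidden_layers("50, 50"))    # (50, 50)
print(parse_hidden_layers("30,30,30"))  # (30, 30, 30)
```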
About cooking times of MLPs:
CPU cooking times depend mostly on the number of input channels and on the network size. One moderately sized ("50") MLP classifier with 150 channels in its input CHOP would probably result in 0.25-0.6 ms of CPU cook time, which can add up the more instances of it you use. In case that gets problematic at some point:
- Toggling the "Predict" par on/off on demand can help. But this can also cause very short spikes in cooking time of up to 1.0-1.5 ms. Since they are very short, it is most likely still more performant overall to only infer/predict when you really need it. Just keep it in mind when toggling a lot of MLPs' "Predict" frequently, as many of these spikes happening at the same time could cause trouble, I guess.
On Updates:
I created an "updates" channel on Discord which is webhooked to any changes on GitHub, so there will be a notification there for any update. For major updates I'll also comment here in the forum / edit the community post page.
Hey Hey - there is a larger Update (v.0.2) including:
- VAE - Variational Autoencoder, new Neural-Network Tox Component
- Multi-Label Classification for MLP
- Threaded Training (optional), avoids freezing TD but is much slower than training in the main thread
- UMAP Rework
- Audio Classification Examples
- Large Code Refactoring
- Inference Optimization
- Minor Bug Fixes
- Community Post Page / ReadMe Update with Instructions for CUDA Usage, Future Developments
- Updated requirements.txt - !make sure to recreate/update the environment! as some package versions have been updated, while some packages were removed
Good stuff @bi.os
Nice to see the TDPyEnvManager being used. I see you are using conda for your environment, but you also mention a requirements.txt, did you try relying only on a pure python vEnv at all? If yes, what was the blocker?
This is to be expected with the 3.11 implementation of Python. But this will change in the future since the good Python folks removed the GIL in 3.13+. Are you using the ThreadManager for your threading tasks? Are you recreating a task all the time or keeping one alive awaiting jobs?
Best,
Michel
Hello,
Yesterday I installed my system on the company computer for the first rehearsal with face tracking and machine learning.
The system is mainly working, using the learning and training I did before coming here, far away from my office and my build computer.
What is going well:
- Just restarting the system, the venv works with all the dependencies inside the main folder. 10 occurrences of face tracking are analysing 10 faces without problems (4K input via Blackmagic).
But…
If I want to redo the data harvest with more realistic conditions, it works, but if I try to train the classifier, there is always an error:
on the red flag (no copy possible)
extmlpcclassifiedStored <AttributeError: /sys/local/modules/TDStoreTools, line 268, in strC:/User/nye/Desktop…
in the textPort
Traceback (most recent call last):
File "/project1/mlp_classifier0/helper_modules", line 25, in _table_to_numpy
ValueError: could not convert string to float: ''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/project1/mlp_classifier0/parexec_main", line 22, in onPulse
File "/project1/mlp_classifier0/extmlpclassifier", line 71, in OnPulse
File "/project1/mlp_classifier0/extmlpclassifier", line 407, in Train
File "/project1/mlp_classifier0/helper_modules", line 27, in _table_to_numpy
ValueError: Non-numeric value at r=0, c=136 ('') after header slicing.
Do you think it is possible to send you my TD project (without the dependencies)?
I come back to Paris tonight and I have 4 days before the next rehearsal.
TD-ML_Maloufi-complet.toe (2.2 MB)
Hi @JetXS, thx!
no actual blockers, just (bad) habit. I mean I use both, but I guess there is no point in conda anymore, since I removed environment.yml because conda install suddenly caused issues for me. I should have removed the conda references in the docs etc. completely then as well; will do that now.
I see, I didn't know about the GIL. My guess was that it's somewhat expected, and I actually wanted to ask about that, so good to know, thx.
I just started using the ThreadManager, but yeah, I recreate tasks on each train call, though I prevent multiple calls while training is still ongoing on the same COMP. Task creation adds some overhead, I assume? Should I rather look into keeping one alive in this situation?
Best,
Joel
@bi.os - did the conda install from the TDPyEnvManager start failing? or a more general issue on your machine?
I think whenever there is overhead that gets repeated and could be optimized by keeping things around / in memory, it's better to start a task, load everything that's needed, and enter a loop in that task so that it's idling. Use a queue to send work to your idling task.
I do that for inference, since you can load everything once and then wait for frames to be processed, avoiding some overhead. Might be slightly different for training, though.
I should look into creating a premade idling task / template for that purpose, so that it's easy to use and no setup is required.
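The idling-task pattern described above can be sketched with Python's standard `threading` and `queue` modules: one worker is started once, "loads" its resources, and then loops on a queue waiting for jobs, so per-job startup overhead disappears. All names here are illustrative:

```python
# Sketch of an idling worker fed through a queue. The "inference" is a
# stand-in (doubling a number); in practice you would load a model once
# before the loop and run predictions inside it.
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()

def worker():
    # Load-once section would go here (model, buffers, ...).
    while True:
        job = jobs.get()
        if job is None:          # sentinel value: shut the worker down
            break
        results.put(job * 2)     # stand-in for real inference work
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

for frame in (1, 2, 3):         # send "frames" to the idling task
    jobs.put(frame)
jobs.join()                      # block until all queued work is done
jobs.put(None)                   # tell the worker to exit
t.join()

out = [results.get() for _ in range(3)]
print(out)  # [2, 4, 6]
```

A `None` sentinel is a common way to end such a loop cleanly; the queue also gives you natural backpressure if jobs arrive faster than they are processed.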
Hey @jacqueshoepffner, I looked into your file - the problem is caused by your dataset: the last column in the X-Table is empty. I removed it, and then it works.
On this remark: I noticed that the "Fill Empty Cells" pulse on the Data Sanitize page of the Datasetter appears to be broken; I will check that. But in your case, if the complete column is empty, there is not much point in filling it with some value anyway - better to just remove it, since it doesn't hold any valuable information.
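For anyone hitting the same `could not convert string to float: ''` error outside the toolkit: the fix amounts to dropping columns that are empty in every row before the float conversion. A hedged sketch (not the actual `_table_to_numpy` helper):

```python
# Sketch: scan a table (list of string rows, as a DAT might provide) and
# drop any column that is empty in every row before converting to floats.
import numpy as np

rows = [
    ["0.1", "0.5", ""],   # last column empty...
    ["0.2", "0.4", ""],   # ...in every row
]
# Keep only columns where at least one row has a non-blank value.
keep = [c for c in range(len(rows[0])) if any(row[c].strip() for row in rows)]
cleaned = np.array([[float(row[c]) for c in keep] for row in rows])
print(cleaned.shape)  # (2, 2)
```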
Thank you, when I come back tonight, I will try.
@JetXS - I can't tell you exactly, as I was rushing for a fix and didn't properly log or investigate it, but I'm relatively sure it was due to general issues on my machine: it appeared suddenly after (messy) changes done by me, not after updating TD. I also didn't have any issues with TDPyEnvManager and conda install in the months before. If I give conda another clean try in the future, I'll let you know whether I still face issues.
Okay, will look into that - but yeah, a premade idling task / template for this sounds nice! thx
Hello, finally back home!
I followed your advice, rebuilding dataset and now it works well.
My question concerns the different files. I began with your .toe, erasing everything except the "rock, paper, scissors" example, which suits my needs. How can I rename it?
I want to know where exactly the files from my project are saved, and what I can throw away without problems.
As you can perhaps see in my project, I train the system with my face, and I want to use the same dataset and classifier for 8 classifiers for 8 face trackers.
How can I connect the new classifiers to the trained one?
Do you have plans to publish a more extended manual?
In any case, the rehearsal went well with your device. Unfortunately, the audience was composed of dance professionals, not very generous with their emotions.
Next week, it will be a new one with a real and younger audience.
Thank you for your time,
Jacques
PS with the new version, I have this error:
C:\Users/Association Aladin/AppData/Local/Programs/Python/Python311/Lib/site-packages\sklearn\base.py:376: InconsistentVersionWarning: Trying to unpickle estimator Pipeline from version 1.8.0 when using version 1.5.1. This might lead to breaking code or invalid results. Use at your own risk. For more info please refer to:
warnings.warn(
My question concerns the different files. I began with your .toe, erasing everything except the "rock, paper, scissors" example, which suits my needs. How can I rename it?
Do you mean the trained model itself? There is a folder par on the MLPs, which by default points to "data/models/", located in your project directory (created if it doesn't exist yet). If you hit "SaveAs", you can enter a name and the model will be saved as a .joblib file to this folder.
How can I connect the new classifiers to the trained one?
When your other instances of the classifier have the same folder path, you can load the saved model into them via the dropdown / model par menu, which lists all models found under the specified folder path. Selecting an entry from the menu already loads it, but there is also a "Reload" par to reload the currently selected model. (The read-only string par "Model Active" shows you what is actually loaded.) So once you have saved the model on the trained classifier, you can just select and load it from the other classifiers.
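Under the hood this corresponds to the standard joblib save/load cycle for scikit-learn models: one instance dumps the fitted pipeline to a .joblib file, every other instance loads the same file. A sketch with illustrative names and a toy model:

```python
# Sketch: sharing one trained model between several classifier instances
# via a .joblib file. The model and file name are placeholders.
import os
import tempfile

import joblib
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit([[0.0], [1.0]], ["a", "b"])

path = os.path.join(tempfile.mkdtemp(), "face_emotions.joblib")
joblib.dump(model, path)            # "SaveAs" on the trained instance

shared = joblib.load(path)          # every other instance just loads it
pred = shared.predict([[0.0]])
print(pred)
```

Because the whole pipeline (scaler included) is pickled together, the loading instances need no extra setup beyond pointing at the same file.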
Do you have plans to publish a more extended manual?
Yes, that will happen, hopefully sooner rather than later. I'll try my best to get it done in the near future.
PS with the new version, I have this error: […]
Hm, yeah, that's a warning when trying to load a model that was trained on an older/newer scikit-learn version. However, the scikit-learn version was not updated with the new release. What's also strange is that your warning refers to scikit-learn version 1.5.1, which I have never used and which was never referenced in the repo; 1.8.0 is the correct version. You can check which version is actually installed in the venv by running (from within your TD project):
import sklearn
print(sklearn.__version__)
But from the user path in the warning it looks like you might not have created a separate venv for this, and that there was an older scikit-learn installation already there which didn't get overwritten when you were installing. So in TD you have loaded the wrong scikit-learn version, I believe. If true: either manually update scikit-learn to 1.8.0, or create a new clean venv via the TDPyEnvManager from the Palette ("create vEnv by requirements.txt") with the provided requirements.txt from the TD-ML repo, and then load only that into the project.
@jacqueshoepffner @bi.os sklearn is not pointing to a version installed in a venv managed by the TDPyEnvManager but to a global Python install:
C:\Users/Association Aladin/AppData/Local/Programs/Python/Python311/Lib/site-packages\sklearn\base.py:376
You likely have something set in Preferences?