When working with ML pose landmarks, it's really useful to be able to filter them by confidence level, so you don't end up with body parts bouncing all over the screen when their confidence is very low. Right now you can do it with a Script CHOP if you really try: find all the confidence channels above some threshold x (0.5 is a good default), find their corresponding coordinate channels, and only use those… but it's a lot of work.
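The Script CHOP approach could look roughly like this, sketched in plain Python rather than the actual CHOP API. The channel names, sample values, and the `:conf` suffix convention are all hypothetical, just to illustrate the filtering logic:

```python
# Hedged sketch of per-landmark confidence filtering (plain Python, not the
# TouchDesigner CHOP API). Channel names and values below are made up.
THRESHOLD = 0.5  # a reasonable default cutoff

channels = {
    "wrist_l:x": 0.42, "wrist_l:y": 0.77, "wrist_l:conf": 0.91,
    "ankle_r:x": 0.10, "ankle_r:y": 0.05, "ankle_r:conf": 0.12,
}

def filter_landmarks(chans, threshold=THRESHOLD):
    """Keep a landmark's channels only if its confidence passes the cutoff."""
    kept = {}
    for name, value in chans.items():
        landmark = name.rsplit(":", 1)[0]          # e.g. "wrist_l:x" -> "wrist_l"
        conf = chans.get(landmark + ":conf", 0.0)  # that landmark's confidence
        if conf >= threshold:
            kept[name] = value
    return kept

print(filter_landmarks(channels))
# wrist_l channels survive; ankle_r (conf 0.12) is dropped
```

A built-in parameter would replace all of this with a single threshold value on the CHOP itself.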
A much more convenient way would be a confidence threshold parameter on the Body Track CHOP, so it only outputs data for a landmark if its confidence is above that value. That would smooth the tracking out considerably and make it a lot easier to use.
You can use our MediaPipe implementation as a reference for how we implemented it: Releases · torinmb/mediapipe-touchdesigner · GitHub
(The Nvidia pose tracking is much better than the current MediaPipe one, especially for multiple people, so I'm very excited to use the Body Track CHOP instead for my current project.)
A question to better understand:
when you say that, does it mean the channels would assume a default value when below the confidence level, or would they actually not be present in the CHOP at all?
A somewhat simpler way to deal with the data, for me, is to shuffle it so that the confidence values end up in a single channel that can be analyzed.
For that I first remove the first 5 channels of the Body Track CHOP and then Shuffle the rest with the method
Sequence Every Nth Channel - N set to
The resulting confidence channel I can then either use in a Delete CHOP to remove samples from all channels (including an id channel so I still know what belongs to what), or in a Logic CHOP where I can create a channel that is
Off When Outside Bounds for the confidence values.
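The shuffle-then-filter idea can be sketched in plain Python (hypothetical sample data, not the actual CHOP API): once the confidences sit in one channel, a Logic-style gate and a Delete-style sample removal are both simple per-sample operations. The bounds and values here are made up for illustration:

```python
# Sketch of filtering shuffled samples by a single confidence channel.
# LOW/HIGH mimic a Logic CHOP's "Off When Outside Bounds" range (assumed values).
LOW, HIGH = 0.5, 1.0

ids        = [0,    1,    2,    3   ]   # id channel, so samples stay identifiable
confidence = [0.92, 0.31, 0.85, 0.07]   # the single shuffled confidence channel
x          = [0.10, 0.20, 0.30, 0.40]   # one of the coordinate channels

# Logic-CHOP style: 1 inside bounds, 0 outside
gate = [1 if LOW <= c <= HIGH else 0 for c in confidence]

# Delete-CHOP style: drop samples where the gate is off, across all channels
kept = [(i, xv) for i, xv, g in zip(ids, x, gate) if g]

print(gate)  # [1, 0, 1, 0]
print(kept)  # [(0, 0.1), (2, 0.3)]
```

The id channel is what makes the Delete-CHOP route workable: after samples are removed, the remaining coordinates can still be matched back to their landmarks.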
Oh hey @snaut thanks for the tip, that’s a great way to solve it!
After doing some more testing, I realised the actual issue I was having was the Body Track CHOP quite often just deciding "nope, no humans to see here" during my tracking. For example, if you put your arms straight up in the air, once your arms go past a certain point you vanish from the tracking.
So for now I think my issue is with the accuracy of the underlying Nvidia model rather than anything I can fix in TD, so I'll have a play with some Python libraries instead and see what works. I'm sure Nvidia will update the model at some point.