Thanks for this update and for asking for consolidated feedback here. The recent updates have been fantastic and very stable. I’m glad to hear about upcoming wrnch.ai support. I used it in a project, and it’s really fast and accurate (easily 90 fps on a 512x256 image with a high-speed camera).
A lot of RFEs get posted in the forums, so you’ve given us a good opportunity to reflect on our requests and pick out our favorites/biggest needs.
- VST/C++ audio in TouchDesigner. This was discussed at the summit Q&A; what’s the current status? There have been requests for FAUST integration, JUCE integration, and more examples of generic C++ digital signal processing code (see the sketch after this list for the kind of thing I mean).
- 3D skeleton example projects. As 3D skeleton tracking data becomes more accessible, it would be great to see some amazing skeleton-rigging OP Snippets, kind of like what was shown in the first inSession live stream. Mixamo rigs and Kinect skeletons are barely compatible out of the box (see the joint-name sketch after this list), and as the tech keeps evolving we need to stay up to date.
- Augmented reality features: Apple’s ARKit, depth tracking, image segmentation, etc. It’s most likely a huge undertaking, and no one-size-fits-all solution will cover the community’s needs.
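
On the audio/DSP point above, here’s a minimal sketch of the kind of example I’d love to see more of, written as a Python Script CHOP for now (a C++ CHOP version would use the same math). It assumes a CHOP is wired into the first input; the channel name and smoothing coefficient are just placeholders:

```python
# Script CHOP callback: one-pole low-pass filter over the first input channel.
# Filters the whole channel each cook; 'lowpass' and alpha are placeholders.
def onCook(scriptOp):
    scriptOp.clear()
    src = scriptOp.inputs[0][0]          # first channel of the first input CHOP
    scriptOp.numSamples = len(src.vals)
    out = scriptOp.appendChan('lowpass')

    alpha = 0.1                          # smoothing coefficient, 0..1
    y = src.vals[0]
    filtered = []
    for x in src.vals:
        y += alpha * (x - y)             # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        filtered.append(y)
    out.vals = filtered
    return
```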
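On the Mixamo/Kinect point: even the joint naming doesn’t line up, before you get to retargeting rotations or differing rest poses. A partial name map (from memory, so the exact strings may be off) gives an idea of the glue code everyone ends up writing:

```python
# Partial Kinect v2 -> Mixamo joint-name map (names from memory, may not be
# exact). The real work is retargeting rotations and handling joints that
# exist on only one side of the mapping.
KINECT_TO_MIXAMO = {
    'SpineBase':     'mixamorig:Hips',
    'SpineMid':      'mixamorig:Spine1',
    'SpineShoulder': 'mixamorig:Spine2',
    'Neck':          'mixamorig:Neck',
    'Head':          'mixamorig:Head',
    'ShoulderLeft':  'mixamorig:LeftArm',
    'ElbowLeft':     'mixamorig:LeftForeArm',
    'WristLeft':     'mixamorig:LeftHand',
    'HipLeft':       'mixamorig:LeftUpLeg',
    'KneeLeft':      'mixamorig:LeftLeg',
    'AnkleLeft':     'mixamorig:LeftFoot',
    # ...and the mirrored right-side joints
}
```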
Lower-tier, but these are curiosities of mine:
- Timeline tools. See @archo-p 's 2019 summit workshop Advanced Techniques in Media Management, Sequencing and Playback - Peter Sistrom (YouTube), @greg 's timeBase - a Multi-Layer Timeline Component, or @TimFranklin 's Layer Based Timeline Tool - Beta.
- GPU SOPs. See my forum post Cooking a directed acyclic graph to make shader code (a toy sketch of the idea is after this list).
- Better built-in shadow features: percentage-closer soft shadows, cascaded shadow maps, something a little prettier looking. I don’t know what’s best these days.
- Easier depth-of-field support
- Easier motion blur support
- Optical flow, via the NVIDIA Optical Flow SDK or some other plugin
- All sorts of machine-learning integrations (TensorFlow/PyTorch). What do people want? I’m thinking PoseNet (because it’s a free alternative to Wrnch), hand tracking, image segmentation, face mesh, etc. I think some people are interested in SLAM techniques. Can we get t[xyz] r[xyz] from a continuous video stream? (A rough sketch of that is below.)
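
To unpack the GPU SOPs item above: the forum post goes into detail, but the core of it is just a topological sort over a node graph, emitting one GLSL statement per node. A toy sketch (the graph, node names, and GLSL snippets are all invented for illustration):

```python
# Toy sketch: walk a small directed acyclic graph of "operators" in
# topological order and concatenate one GLSL statement per node.
# The graph, node names, and GLSL helper functions are invented.
from graphlib import TopologicalSorter  # Python 3.9+

# node -> list of upstream nodes it depends on
graph = {
    'noise1':     [],
    'transform1': ['noise1'],
    'add1':       ['noise1', 'transform1'],
}

# one GLSL statement per node, referencing its upstream nodes by name
glsl_templates = {
    'noise1':     'vec3 noise1 = curlNoise(P);',
    'transform1': 'vec3 transform1 = rotate(noise1, angle);',
    'add1':       'vec3 add1 = noise1 + transform1;',
}

def emit_shader(graph, templates):
    lines = ['// generated vertex-deform body']
    for node in TopologicalSorter(graph).static_order():
        lines.append(templates[node])
    return '\n'.join(lines)

print(emit_shader(graph, glsl_templates))
```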
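For the SLAM question in the last item: a full SLAM integration is a big ask, but a rough monocular visual-odometry loop with OpenCV shows what’s involved, and it leans on optical flow too. The video path and camera intrinsics below are placeholders, and monocular translation only comes back up to scale:

```python
# Rough monocular visual-odometry sketch with OpenCV: track features between
# consecutive frames, estimate the essential matrix, and recover the relative
# rotation R and (up-to-scale) translation t. 'video.mp4' and K are placeholders.
import cv2
import numpy as np

K = np.array([[700., 0., 320.],   # guessed focal length / principal point
              [0., 700., 240.],
              [0., 0., 1.]])

cap = cv2.VideoCapture('video.mp4')
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # sparse features in the previous frame, tracked into the current one
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None or len(p0) < 8:
        prev_gray = gray
        continue
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]

    # relative camera motion between the two frames
    E, mask = cv2.findEssentialMat(good1, good0, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good1, good0, K, mask=mask)

    print('t (up to scale):', t.ravel())
    prev_gray = gray

cap.release()
```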
UPDATE: TRY OUT THESE PROJECTS