Yeah, I’ve run a few experiments capturing full XYZ data and it is pretty limiting.
I’m pretty confident just capturing the depth map is sufficient to recreate the point cloud - I believe that is all Microsoft is doing in their mkv files with the recorder utility.
I expect we can add that python function in the next few weeks, but as always I’d recommend making whatever backup plans are reasonable since I can’t say for certain what surprises may pop up.
Sorry for the unresponsiveness over the last month.
I was on the shoot, and I actually had to capture full XYZ.
It was rough, to be honest. I really had to fight performance issues.
So I'm really hoping this gets implemented soon, so I don't have to redo all this on a possible future shoot.
Also, I now have the XYZ and RGB data, but I really want to be able to work flexibly with it.
So again, these functions would be very good to have.
I suppose transforming XYZ back to depth only is just taking the blue channel?
If so, there's no need for an extra function there, I guess.
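If the XYZ frames really do store Z in the blue channel, that extraction is a one-liner in NumPy. A minimal sketch (the channel layout and resolution here are assumptions, not confirmed for your recordings):

```python
import numpy as np

# Hypothetical XYZ frame: H x W x 3 float array with X, Y, Z stored in the
# R, G, B channels (Z = distance in meters). This layout is an assumption.
xyz = np.zeros((576, 640, 3), dtype=np.float32)
xyz[100, 200] = (0.25, -0.10, 1.5)  # one sample point for illustration

# "Depth only" is then just the Z (blue) channel:
depth = xyz[..., 2]
print(depth[100, 200])  # 1.5
```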
Would be great if you could keep me updated about any progress.
Thanks a lot!
Sorry to hear about the difficulties, but capturing everything was probably the safest way at the moment.
I did make some progress on the depth-to-point-cloud function - there were a couple of extra tricks needed to format the NumPy arrays, but the proof of concept seems to be working pretty well. I can probably clean it up and get you a build early next week.
The additional transforms (depth to color, color to depth) should work in pretty much the same way.
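For reference, the core of a depth-to-point-cloud transform is a pinhole back-projection. A minimal NumPy sketch (the intrinsics here are made-up example values, and it ignores the lens distortion that the real Kinect calibration handles):

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point, in
# pixels). The real transform uses the camera's factory calibration,
# including distortion, which this sketch ignores.
fx, fy, cx, cy = 504.0, 504.0, 320.0, 288.0

def depth_to_points(depth):
    """depth: H x W array of Z distances in meters; returns H x W x 3 XYZ."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.dstack((x, y, depth)).astype(np.float32)

depth = np.full((576, 640), 2.0, dtype=np.float32)  # flat wall 2 m away
pts = depth_to_points(depth)
print(pts[288, 320])  # pixel at the principal point -> [0. 0. 2.]
```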
Sounds great, thanks!
Let me know once you have it ready. Would ideally like to test it once the other functions are also available.
I’ve got depthImageToPointCloud working pretty well now - the point cloud calculated from the cached depth map is functionally identical to the live streamed version.
It’s a little slower than I hoped because it’s all done in the main CPU thread, but it’s definitely fast enough to run in real time. The Engine COMP might be a good option for moving the decoding into a separate thread.
I’ll put a build together today that you can experiment with.
Nice. Would be great if you could include an example setup of how to use it.
Not too familiar with using NumPy etc. in Touch.
Here’s a link to the experimental build, as well as a toe file that demonstrates the function. In the toe file there is a Script TOP that pulls the depth image from null1, runs the transform, and then writes the results into the Script TOP. There are some extra nodes to compare the result with the direct point cloud to make sure they match.
This should work just as well with depth images loaded from a Movie File In TOP, but you will still need the Kinect node to do the transform since it holds the calibration info. Eventually we could look at a way to save/restore the calibration data so that you don’t need the camera connected.
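In outline, the Script TOP's callback does something like the following (a simplified sketch; the operator names and the exact method signature here are assumptions and may differ from the demo file):

```python
# Simplified sketch of a Script TOP onCook callback
# (operator names and the exact method signature are assumptions).
def onCook(scriptOp):
    depth = op('null1')          # cached/recorded depth image
    kinect = op('kinectazure1')  # needed for its calibration data
    # Run the depth -> point cloud transform on the cached frame
    # and write the resulting XYZ array into the Script TOP.
    pts = kinect.depthImageToPointCloud(depth.numpyArray())
    scriptOp.copyNumpyArray(pts)
    return
```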
Let me know if you have any issues/questions.
transformDepthToPointCloud.toe (17.7 KB)
I’ll have a look at it.
Sorry for the late reply. I was busy with some other parts of the production; now I'm back working on the depth maps.
Thanks a lot for implementing this. It seems to work perfectly.
I wish I had had this before shooting; it could have saved me a lot of trouble and data while recording. But I should have realized this earlier. Anyway, next time will be smoother.
Three things remain that would be amazing to have addressed:
Strangely, the Touch build you posted here does not seem to be able to read from network locations. This is important for me, as I have TBs of depth data on my NAS. Could this be fixed? Or is the feature already implemented in the main release, where network locations work?
These two functions would be very handy to have, implemented in a similar fashion:
A way to lock/store the calibration data. Locking/freezing the Kinect TOP works inside an open project, but not once the project is reopened. As I have the Kinects available, it is fine for now, so not very urgent for me. But I guess this will be important in the future, and convenient for others.
Hey jacques - I’m dealing with pre-recorded Kinect footage, so I only have the color/depth maps to work with.
Sorry for the slow response - schedule has been a little off for the holidays.
No idea offhand about the network reading. This build was branched a few months ago, so it may be a little out of sync with the main release. I think we’re probably OK to merge it now, but I’ll confirm with the team once we’re back to normal after the holidays and let you know.
It shouldn’t be too hard to expose those functions now that I’ve got a framework from the other ones. I will look into it.
Unfortunately, this one is a little more awkward because of how the node currently requires an active connection to a camera. I’ll see if there’s a reasonable way to do it without reworking too much of the code.
If you still have access to the kinect camera that recorded it, then you can use the build and toe file I posted just above to convert it into a point cloud.
There is also a new depthProjection component in the latest 2022 release in the PointClouds folder of the palette that will project depth maps into 3d points.
No worries about the delay.
Yeah, not sure either, but I tried everything, and network reading works in 2022.26590 but not in 2022.29231.
Would be great if it could be merged into the main build; I suppose that would solve it.
Would be great! Let me know.
I see. Again, not super urgent for me currently, but I suppose it's pretty important for other use cases, e.g. with rented cameras.
Here’s a link to an updated build: Dropbox - TouchDesigner.2022.31261.exe. Hopefully this resolves the networking issue.
It also includes the new functions for the color-space-to-depth-space transform and vice versa. Note: the color-to-depth transform requires both the original depth and color images (just a requirement of the SDK).
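In outline, calling the new transforms from a Script TOP looks something like this (a simplified sketch; the method names here are illustrative and may not match the build exactly, so check the demo file for the real API):

```python
# Simplified sketch of calling the new space transforms from a Script TOP
# (operator names and method names are illustrative assumptions).
def onCook(scriptOp):
    kinect = op('kinectazure1')
    depth = op('null_depth').numpyArray()
    color = op('null_color').numpyArray()
    # Depth -> color space needs only the depth frame...
    depth_in_color = kinect.depthImageToColorSpace(depth)
    # ...but color -> depth space needs both frames (an SDK requirement).
    color_in_depth = kinect.colorImageToDepthSpace(color, depth)
    scriptOp.copyNumpyArray(color_in_depth)
    return
```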
I’ve updated the demo file to show the new functions: transformDepthToPointCloud.toe (18.2 KB)
I’m still looking at options for doing this with the camera offline.
Thanks for the update.
Unfortunately the network reading issue is still not resolved for me.
See screenshot. I get an error from the Touch explorer, but in Windows the network location is present.
That’s really odd. That file browser is just a built-in Windows library, so we don’t have much control over what it has access to (it just returns the selected file path to us).
Does it make a difference if you open TouchDesigner in administrator mode? Can you access that folder via other versions of TouchDesigner?
Yes, as I mentioned, in 2022.26590 (the official release) I can open the destination folder fine.
No network locations are accessible from the last two builds you sent me, not just this one.
No difference running as admin.
If you shift right-click on an app in windows you can also choose to run the program as a different user. Is it possible you need to run as a user that has permissions on that network machine?
Also worth checking are the firewall permissions for each instance of TouchDesigner.
Not sure yet why this version would be any different than our official releases, but still looking around.
No, I am running all versions of Touch with the same user. The user account has access to all network drives. This can’t be the problem.
It’s really like this:
- I start with admin user TD 2022.26590 → Load file with MovieFileIn → Try to navigate to any network location: fine
- I start with admin user TD 2022.29231 or TD 2022.31261 → Load file with MovieFileIn → Try to navigate to any network location: no network locations showing (if I paste the path it errors)
Just to confirm, did you look at the Windows firewall settings for the particular exes? I gather that can block a network drive for just one application.
Also, have you tried any of our newer official releases after 2022.26590? Both of those branch builds include newer changes aside from the Kinect stuff, so I want to rule out that something else changed in the core code that could be affecting things.