Does anyone know if there are examples of reading the depth point cloud into TouchDesigner with the OAK Select POP? The examples file from Derivative still works in the new 2025 build of TouchDesigner, and there is an OAK Select POP, but not a lot of documentation on how to use it. Any guidance would be appreciated!
Sorry, this should have been in the OP Snippets; we'll get it in there for the next release.
The OAK Select POP works with DepthAI's PointCloud node (dai.node.PointCloud), which takes the stereo node's output as an input.
# me - this DAT
# oakDeviceOp - the OP which is cooking

import depthai as dai

def onInitialize(oakDeviceOp, callCount):
    return 0

def onInitializeFail(oakDeviceOp):
    parent().addScriptError(oakDeviceOp.scriptErrors())
    return

def onReady(oakDeviceOp):
    # We get the depthai.Device object as `device`
    # https://docs.luxonis.com/projects/api/en/latest/components/device/
    # if device := oakDeviceOp.lockDevice():
    #     queue = device.getInputQueue("config")
    #     queue.setMaxSize(4)
    #     oakDeviceOp.unlockDevice()
    return

def onStart(oakDeviceOp):
    return

def whileRunning(oakDeviceOp):
    return

def onDone(oakDeviceOp):
    return

def createPipeline(oakDeviceOp):
    # This example creates an RGB camera plus a stereo depth point cloud.
    pipeline = dai.Pipeline()

    camRgb = pipeline.create(dai.node.ColorCamera)
    camLeft = pipeline.create(dai.node.MonoCamera)
    camRight = pipeline.create(dai.node.MonoCamera)
    stereo = pipeline.create(dai.node.StereoDepth)
    pointCloud = pipeline.create(dai.node.PointCloud)
    xoutPointCloud = pipeline.create(dai.node.XLinkOut)
    rgbOut = pipeline.create(dai.node.XLinkOut)

    camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
    camLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
    camRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
    camLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
    camRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)

    stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)
    stereo.setSubpixel(True)
    stereo.setSubpixelFractionalBits(5)
    stereo.setExtendedDisparity(True)
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
    stereo.setRectifyEdgeFillColor(0)  # Black, to better see the cutout

    # set output stream names
    xoutPointCloud.setStreamName('pointcloud')
    rgbOut.setStreamName('rgb')

    # link nodes
    camRgb.video.link(rgbOut.input)
    camLeft.out.link(stereo.left)
    camRight.out.link(stereo.right)
    stereo.depth.link(pointCloud.inputDepth)
    pointCloud.outputPointCloud.link(xoutPointCloud.input)

    return pipeline
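A side note on the subpixel settings above: with setSubpixelFractionalBits(5), disparity is quantized to 1/2^5 = 1/32 of a pixel, which reduces depth banding at range. Here's a back-of-the-envelope sketch of that relationship in plain Python (the function and the focal/baseline numbers are illustrative, not calibrated OAK-D values):

```python
# Back-of-the-envelope stereo math (plain Python, not the DepthAI API).
# With stereo.setSubpixelFractionalBits(5), disparity is measured in
# steps of 1/2**5 = 1/32 pixel, which smooths depth quantization.

def depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Standard pinhole stereo relation: depth = f * B / d."""
    return focal_px * baseline_mm / disparity_px

subpixel_bits = 5
disparity_step = 1 / 2**subpixel_bits  # 0.03125 px

# Illustrative numbers only:
print(depth_mm(800, 75, 40))                    # 1500.0 (mm) at 40 px disparity
print(depth_mm(800, 75, 40 + disparity_step))   # the next representable depth
```

More fractional bits means adjacent disparity values map to depths that are closer together, so far-away surfaces look less "sliced" into planes.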
Hi @snaut, thank you! This worked perfectly for bringing in the point cloud. The point cloud in POPs looks way smoother than it did in the old depth example, which is awesome!
Unfortunately I'm getting an insane number of points coming in from the depth-map image. This doesn't seem right, as the OAK-D isn't outputting that high a resolution. When I try to instance these point clouds in a super basic way it tanks my computer's frame rate (it goes from 60 fps to 7 fps). It seems like there are just too many points to render. Do I need to add anything in front of the OAK Select to filter out unused points or something? I tried instancing with both the Box POP and the Box SOP and got the same results.
I thought there were two ways to adjust the resolution of the pointCloud node, but it seems this has no impact, and setting the sparsity is actually hanging the process right now (we are having a look).
Apart from that, the point cloud is basically the size of a 1920x1080 texture. By the way, with POPs there is not really a need for instancing: you can make use of the Copy POP, or render the POPs directly using a Line MAT or similar.
Hi @snaut, ah, got it. Sorry, I'm still a bit of a POPs noob. My final question is how to merge the RGB data into the POPs point cloud. I tried using the Lookup Texture POP, but there is no TEX attribute or colorindex attribute as shown in the OP Snippets for Lookup Texture.
It looks like setDepthAlign was forcing the resolution of the MonoCamera to match that of the color camera, which is why the point cloud was always at the maximum resolution. If you remove that line and change the dai.MonoCameraProperties.SensorResolution, it should give you a smaller point cloud.
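To make the size difference concrete, here's a quick count (plain Python, no DepthAI needed; the helper name is mine) of points per frame at the depth-aligned 1080p size versus a lower mono resolution such as THE_400_P:

```python
# Point-cloud size per frame: the PointCloud node emits one point per
# depth pixel, so the cloud is exactly depth-resolution sized.

def points_per_frame(width: int, height: int) -> int:
    return width * height

aligned_1080p = points_per_frame(1920, 1080)  # depth aligned to the color cam
mono_400p = points_per_frame(640, 400)        # e.g. THE_400_P mono resolution

print(aligned_1080p)              # 2073600 points every frame
print(mono_400p)                  # 256000 points
print(aligned_1080p // mono_400p) # roughly 8x fewer points to render
```

Over two million points per frame explains the frame-rate drop; dropping to 640x400 is about an 8x reduction before any sparsity or filtering.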
As Markus mentioned, you can also use the sparsity parameter; however, we discovered a bug there that should be fixed in future releases.
As for the RGB, here's a small tox file that takes the image from the OAK Select TOP and combines it with each point of the point cloud.
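Conceptually, what the lookup does is index each point's pixel position into the color image, which lines up because setDepthAlign(CAM_A) registers depth to the RGB sensor. A plain-Python sketch of that idea (the function and data are mine for illustration, not the Lookup Texture POP API):

```python
# Conceptual per-point color lookup (plain Python, not TouchDesigner).
# Because depth is aligned to the RGB camera, the point from depth pixel
# (x, y) maps to the same normalized (u, v) in the color frame.

def lookup_color(rgb, rgb_w, rgb_h, u, v):
    """rgb: flat row-major list of (r, g, b) tuples; u, v in [0, 1)."""
    x = min(int(u * rgb_w), rgb_w - 1)
    y = min(int(v * rgb_h), rgb_h - 1)
    return rgb[y * rgb_w + x]

# Tiny 2x2 "image": top-left red, top-right green, bottom row blue.
img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 255)]
print(lookup_color(img, 2, 2, 0.0, 0.0))  # (255, 0, 0)
print(lookup_color(img, 2, 2, 0.9, 0.0))  # (0, 255, 0)
```

In TouchDesigner the same mapping happens on the GPU per point, but the indexing idea is the same.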
Thank you @snaut @huck! I've got exactly what I need. I'm going to mark this as solved, but I did get an error when I tried to run your stereoDepth.tox: it said that your OAKD Callbacks were calling an overridden function setupPipeline instead of createPipeline. I also saw you are using a newer version of TD; did the workflow change, possibly? Just wanted to point that out, but I'm good to go. Thank you!
Whoops, that was the wrong tox. The previous one was using DepthV3 (which includes workflow changes), but that will not make its way into the official build quite yet, so feel free to ignore it.
@mattrossalbatross I'm wondering if you were successful with the OAK Select POP. I did manage to get the number of points under control, but the depth output doesn't look right: the points seem rather noisy and randomly distributed, nothing that resembles a depth representation of anything in front of the camera.