I’ve started playing around with OpenCV to do some background subtraction and get player outlines from the Microsoft Azure Kinect. The native approach seems slow and laggy. I’ve been able to do it fairly easily and quickly in pure Python using the depth channel from the Kinect, but I’ve been struggling to get it working in TouchDesigner. My Python code is as follows:
import cv2 as cv
import numpy as np

bg_mog2 = cv.createBackgroundSubtractorMOG2(history=100, varThreshold=16, detectShadows=False)

for i in range(1, 1789):
    frame = '/Users/d/Desktop/opencv/images/{:04d}.png'.format(i)
    frame = cv.imread(frame)
    frame = cv.medianBlur(frame, 5)  # knock down depth noise before subtraction
    mask = bg_mog2.apply(frame, learningRate=0)  # learningRate=0: don't update the model
    cv.imshow("frame", frame)
    cv.imshow("mask", mask)
    key = cv.waitKey(30)
    if key == 27:  # Esc to quit
        break
I’ve tried doing something similar in the Script TOP, but I’m having difficulty getting the mask back into Touch, and I’m wondering if I’m missing something here?
import cv2 as cv
import numpy as np

bg_mog2 = cv.createBackgroundSubtractorMOG2(history=100, varThreshold=16, detectShadows=False)

def onSetupParameters(scriptOp):
    return

def onPulse(par):
    return

def onCook(scriptOp):
    # numpyArray() returns a float32 RGBA array with values in the 0-1 range
    frame = op('OUT_image_stream').numpyArray(delayed=False)
    # MOG2 expects 8-bit input; take the depth channel and rescale to 0-255
    gray = (frame[:, :, 0] * 255.0).astype(np.uint8)
    gray = cv.medianBlur(gray, 5)
    mask = bg_mog2.apply(gray, learningRate=0)
    # Pack the 0/255 uint8 mask into a float32 RGBA image back in the 0-1 range
    h, w = mask.shape
    res = np.zeros((h, w, 4), dtype=np.float32)
    res[:, :, 0] = mask.astype(np.float32) / 255.0
    res[:, :, 3] = 1.0  # opaque alpha
    scriptOp.copyNumpyArray(res)
    return
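For what it’s worth, the step that’s easy to get wrong here is the dtype/range conversion: MOG2 returns a uint8 mask with values 0 or 255, while a 32-bit float Script TOP wants 0-1, so the raw mask just clamps to white. A minimal sketch of the conversion on its own, using a fake mask in place of the MOG2 output (the 424x512 size matches the Kinect depth stream assumed above):

```python
import numpy as np

# Simulated MOG2 output: single-channel uint8, values 0 or 255
mask = np.zeros((424, 512), dtype=np.uint8)
mask[100:200, 100:200] = 255

# Build the float32 RGBA array the Script TOP expects
h, w = mask.shape
res = np.zeros((h, w, 4), dtype=np.float32)
res[:, :, 0] = mask.astype(np.float32) / 255.0  # scale 0-255 into the TOP's 0-1 range
res[:, :, 3] = 1.0  # opaque alpha

print(res[:, :, 0].max())  # 1.0
```

Sizing the output from mask.shape instead of hard-coding it also keeps the script working if the stream resolution changes.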