Intel NUC 8 VR

Hello fellows,

I am about to start a project that needs 7680x1080 generative content that reacts to people passing by. The displays will be four LG 55VM5E panels.

The hardware solutions team offered an Intel NUC 8 VR, with one Microsoft Kinect V2 for interaction.
I made a quick sketch for a better understanding of the set-up. From what I can see here, the Kinect will surely not cover 120 degrees or more, and from what I know you can only use one per machine.

I have some doubts/questions about this set-up, which I will list below.

  1. Kinect limitations: you can't put two on the same machine, and the FOV is too small.
  2. Can I output a 7680x1080 signal from a NUC8? (Or maybe I can split the signal in two and let the media server do the rest.)
  3. Do you recommend using two Azure Kinects instead? (Would I have to choose another SFF PC, since the Azure requires at least a GTX 1070?)
  4. Would Intel RealSense be a better solution? Four connected together would give 162-degree coverage (see the coverage sketch after this list). The minimum distance varies a lot between models, but I don't know about the maximum range.
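
For reference, here is the rough coverage math I am working from. The FOV figures are approximate spec-sheet values, and the geometry is a simplification that assumes a head-on sensor and ignores mounting angle and overlap:

```python
import math

def coverage_width(fov_deg: float, distance_m: float) -> float:
    """Width covered by a sensor with a given horizontal FOV at a given
    distance, assuming it faces the scene head-on (a simplification)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# Approximate horizontal depth-FOV figures from the spec sheets:
SENSORS = {
    "Kinect V2":          70,   # degrees
    "Azure Kinect NFOV":  75,
    "Azure Kinect WFOV": 120,
    "RealSense D435":     87,
}

for name, fov in SENSORS.items():
    # The hallway is ~2.80 m deep, so the far wall is the worst case.
    print(f"{name}: ~{coverage_width(fov, 2.8):.2f} m wide at 2.8 m")
```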

Your opinions and experiences would help a lot on this matter, so thank you in advance.

Cheers,
golo

Either 2x Azure or 2x RealSense should get you all the coverage you need, depending on how close to the screen the active area needs to be, and both are relatively plug and play. With 4 units you could merge the point clouds and fill all of the holes if you want multi-user interaction. The Azures are a bit less noisy and have skeleton tracking; the RealSenses are a bit cheaper and have a wider field of view.
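
Merging is straightforward once each unit has an extrinsic calibration: in numpy terms it's one rigid transform per sensor plus a concatenate. A minimal sketch - the 4x4 matrices here are identity placeholders, you'd substitute the ones from your own calibration:

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Transform each sensor's points (Nx3, sensor space) into a shared
    world space via its 4x4 extrinsic matrix, then stack into one cloud."""
    merged = []
    for points, T in zip(clouds, extrinsics):
        homogeneous = np.hstack([points, np.ones((len(points), 1))])  # Nx4
        merged.append((homogeneous @ T.T)[:, :3])                     # back to Nx3
    return np.vstack(merged)

# Placeholder data: identity extrinsics; real ones come from calibration.
clouds = [np.random.rand(1000, 3) for _ in range(4)]
extrinsics = [np.eye(4) for _ in range(4)]
world_cloud = merge_point_clouds(clouds, extrinsics)  # shape (4000, 3)
```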

The best solution really boils down to the kind of interaction you want: are you piloting a puppet, blob tracking, tracking position data, depth thresholding? If you've got the budget, I feel like the Kinects have the leg up right now, and are more versatile across a variety of installations.
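
Depth thresholding in particular is cheap and robust for presence triggers. A minimal sketch - the near/far band and the 2% trigger are made-up values you'd tune to the hallway:

```python
import numpy as np

def presence_mask(depth_m: np.ndarray, near: float = 0.5, far: float = 2.5):
    """Keep only pixels whose depth falls inside the active band (metres);
    anything nearer or farther than the band is treated as background."""
    return (depth_m > near) & (depth_m < far)

depth = np.random.uniform(0.0, 4.0, (576, 640))  # stand-in for a depth frame
mask = presence_mask(depth)
someone_there = mask.mean() > 0.02               # >2% of pixels in the band
```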

There are good powered USB-C extenders that I've gotten to work at 100 feet, but if the NUC is behind the screens and you only need two units, you should be able to get the coverage with a standard set of 6 ft cables. Just thoroughly test running all of the units you plan on using on the production machine, because USB bandwidth can be a bottleneck and you probably have other peripherals as well.

It looks like it will support up to 6 simultaneous displays, and your resolution is really one 4K signal's worth of pixels - so it could certainly work for your displays. I think the graphics processing is a little underpowered, and a little light on VRAM. I'm also not sure how AMD cards handle tearing across multiple displays - I've had the best luck with NVIDIA hardware, or with using a distribution box so that I'm really only outputting one video signal that's then broken up across displays.
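
To make the "one 4K signal" point concrete:

```python
# 7680x1080 carries exactly as many pixels as one UHD (3840x2160) frame,
# so a single 4K-capable output can feed the whole wall if split downstream.
assert 7680 * 1080 == 3840 * 2160 == 8_294_400
```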

A good distribution box is going to be close to $2000 USD - which is more than the list price for the NUC ($880 USD). The NUC8 looks like it has an AMD Radeon RX Vega M GH - it's got 4 GB of VRAM, which isn't bad.

I think you could make it work, but you’d be happier if you had a different GPU - IMO.

I would like to have skeleton tracking with just a few points (hands, head, hips). As you can see, the hallway is not that wide (~2.80 m) and the screens cover almost 5 m.
I have not had the chance to get my hands on an Azure. Can you track multiple players like you could with the Microsoft Kinect?
Another point: the Azure specs require an NVIDIA card. Or can I work with the Radeon Vega as well?
Do you think a custom SFF PC would be a safer option for the installation?
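
For context, the mapping I have in mind is just a linear remap of tracked joints onto the canvas. A sketch, assuming the sensor's X axis has been calibrated to run along the screen wall (the 0.68 m height is the approximate active height of a 55" panel):

```python
WALL_W_M, WALL_H_M = 5.0, 0.68   # physical screen span (height approximate)
CANVAS_W, CANVAS_H = 7680, 1080

def joint_to_canvas(x_m: float, y_m: float):
    """Map a joint position in wall-aligned metres (origin at the wall's
    bottom-left corner) to pixel coordinates on the 7680x1080 canvas."""
    px = (x_m / WALL_W_M) * CANVAS_W
    py = CANVAS_H - (y_m / WALL_H_M) * CANVAS_H  # flip: pixels grow downward
    return px, py

# e.g. a hand tracked 2.5 m along the wall, 0.34 m above its bottom edge
print(joint_to_canvas(2.5, 0.34))  # ~ (3840.0, 540.0)
```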

As @raganmd pointed out, there are other reasons too why an NVIDIA card would be desirable. For synced multi-screen output, the Quadro series cards are the gold standard: they allow for synced outputs without tearing, EDID management, and all of the features of TD that require NVIDIA just work.

I use an SFF with a P5000. If that's an option, I would certainly prefer it to the NUC, although now you need to find a spot to put it; it won't fit behind the screens as easily. You will also be up against the 4-screen limitation of NVIDIA, making calibration a bit tricky. If money were no object, a Datapath FX4 gets you the sync without NVIDIA, but you'd still need an NVIDIA card for the Azures, and that adds a non-negligible amount to the bottom line.

I believe the Azures can track up to 6 players.

Another thing you could look into, if you can swing the license, is the new wrnchAI CHOP in the experimental build. The license isn't cheap, and you need TD Pro as well, but I hear the skeleton tracking is superior to the Kinect's, with just an RGB camera.

I think a Quadro will be a bit of a stretch. Maybe I can manage with an RTX 2080. They will have a media server that transmits to the 4 displays over Ethernet. They also want to use a platform called PADS4 for a dynamic digital signage environment. The other option would be 4 HDMI signals, one to each display, with PADS4 doing the switching between sources. I am still researching and in discussions with the guys from PADS about how we will combine the two things.

“the new wrnchAI CHOP in the experimental build” - I am not sure what you mean. I have a 2019 license myself and the latest 2020 build, and did not find the CHOP. Does the Azure require a Pro license? Thanks.

wrnchAI CHOP is in the new experimental builds only, so you won’t see it in the Official builds until later this year.

It requires a TouchDesigner Pro license and a separate wrnchAI license.

Thank you. Oh, it's for body tracking from a video source. I don't need that in this project, but it's good to know.