
Are any of the Real Sense operations aware of the Movidius NCS stick?

RTasa
New Contributor I
4,019 Views

Also, can any of the operations be accelerated with the NCS stick? How about with 4 of them?

I was looking over the SDK and noticed facial recognition, voice recognition, and a few other DNN favorites. Since you guys work with each other, it would be interesting for you to have a Movidius NCS-accelerated RealSense SDK that could offload more operations the more sticks you add.

Seems like a natural pairing.

0 Kudos
15 Replies
PSnip
New Contributor II
930 Views

Good point ChicagoBob.

My assumption was that on the D435 and D415, Intel may already be using the Movidius processor for depth computation. However, I am not 100% sure about it.

I am also hoping that the NCS App Zoo will get more and more content in the future. As of now there is not much there except some basic examples.

About Facial Recognition, I have not tried it myself. Is it computationally too expensive?

0 Kudos
MartyG
Honored Contributor III
930 Views

Gary Brown, the director of marketing for Movidius, was asked whether it would be possible to combine the NCS with RealSense in an August 2017 interview with InfoQ. He replied:

"Yes, of course. For developers creating deep neural networks requiring depth information, Intel RealSense technology can be combined as an input to a deep neural network running on Movidius Neural Compute Stick."

https://www.infoq.com/news/2017/08/movidius-neural-compute-stick Q&A with Movidius, a Division of Intel Who Just Launched the Neural Compute Stick

0 Kudos
PSnip
New Contributor II
930 Views

Hi Marty,

Thanks for sharing it. Really interesting interview.

However, I think ChicagoBob wanted to know if Intel plans to release an SDK with NCS support. So basically, as programmers, we would only call the APIs, and the SDK would do the job of deciding what to run on the NCS.

I would like to see some initiative from Intel where they combine RealSense and the NCS to build a platform for developers, similar to what they have already done on Euclid. Recently I talked to someone at Intel and they said that there are no such plans. As far as I know, the Movidius and RealSense teams are still pretty separate at Intel, and they have no plans to collaborate. So it is up to users to combine RealSense and the NCS. But that may not be what ChicagoBob wanted?

Regards,

PS

MartyG
Honored Contributor III
930 Views

Yes, I knew that this probably wasn't what ChicagoBob was looking for. I figured that some news of interoperability was probably better than none, though. Without an SDK, I guess one might be able to make some form of connection to the NCS via the D435's GPIO connections for linking to external devices, though it's unknown what you could usefully do with such a link.

I imagine that some enterprising person with tech skills might find a way to create a mod to access NCS functions on a setup with a RealSense camera attached. If NCS has a channel for accepting RealSense as an input, then you would think that the door could be made somehow to flow the other way, from NCS to RealSense.

0 Kudos
RTasa
New Contributor I
930 Views

I just got the NCS stick yesterday. I took an old laptop, added an SSD, and built an environment to work with.

I'm not totally done with that yet; currently Python doesn't know what TensorFlow is.

Anyway, as a solution developer who runs a million miles an hour in all directions at the same time, it's hard to dig deep into any tech that doesn't produce some positive results out of the box. Running the NCS getting-started examples made me scratch my head.

The getting-started guide is really lacking. There was no information as to what the NCS helped with, what feature was performance-enhanced, or what I should expect.

I am going to post this question on the developer support site. Since DNNs and RealSense are solutions with some synergy, you would think the whiz kids at Intel and NCS would instantly find ways to make the solution end-to-end and be able to point out how great it is to do so. IPP (Integrated Performance Primitives), which I think is under the sheets of the RealSense stuff, has been around for a long time, and those coders should be able to use the NCS primitives to squeeze out the fastest results, don't you think?

Me, I am still trying to figure out what I bought.

Maybe thePaintedStripe can let me in on what I should be seeing.

I got some numbers and results, with no explanation as to what I am even running.

0 Kudos
MartyG
Honored Contributor III
930 Views

I had a look just now through the NCS Getting Started manual that you mentioned. I don't have the stick myself and so can't run practical tests. I'm pretty good at converting manual-speak into plain English though, so I'll have a try. I can see your point - I have an engineering degree and I was scratching my head too.

What do I do with this stick?

The manual thinks that the main uses for the stick will be to either

(a) set up a neural network on a desktop or laptop computer, as you did; or

(b) create an application on the computer that accesses the NCS stick's hardware in order to accelerate the processing of a neural network. The NCS' API software provides the linkages for the application to connect with the stick hardware.
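To make (b) more concrete, here is a rough Python sketch of that open/allocate/load/get workflow. I don't have the stick, so `FakeDevice` and `FakeGraph` below are invented stand-ins whose method names mirror the NCSDK v1 `mvncapi` calls (`OpenDevice`, `AllocateGraph`, `LoadTensor`, `GetResult`); treat it as an illustration of the pattern, not as working NCS code:

```python
# Sketch of the NCS application workflow described in (b):
# open device -> allocate graph -> load input tensor -> read result.
# FakeDevice/FakeGraph stand in for mvnc.Device objects so this runs
# without a stick; they are NOT Intel's code.

class FakeGraph:
    def __init__(self, graph_bytes):
        self.graph_bytes = graph_bytes
        self.pending = None

    def LoadTensor(self, tensor, user_obj):
        # The real call pushes the tensor to the stick; we just queue it.
        self.pending = tensor
        return True

    def GetResult(self):
        # The real call blocks until inference finishes on the VPU.
        # Here we fake an "inference" by summing the input.
        return sum(self.pending), None

    def DeallocateGraph(self):
        self.pending = None

class FakeDevice:
    def OpenDevice(self):
        pass

    def AllocateGraph(self, graph_bytes):
        return FakeGraph(graph_bytes)

    def CloseDevice(self):
        pass

def run_inference(device, graph_bytes, input_tensor):
    """The open/allocate/load/get pattern of the NCS API."""
    device.OpenDevice()
    graph = device.AllocateGraph(graph_bytes)
    try:
        graph.LoadTensor(input_tensor, None)
        output, _user = graph.GetResult()
        return output
    finally:
        graph.DeallocateGraph()
        device.CloseDevice()

print(run_inference(FakeDevice(), b"compiled-graph", [0.25, 0.5, 0.25]))  # prints 1.0
```

With a real stick you would look up the device through the SDK's device-enumeration call and pass in graph bytes produced by its model compiler, rather than the placeholder used here.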

How do I get started with it once set up?

The workflow diagram refers to undertaking a 'training' phase before the NCS stick is actually used. That is the sole mention of training in the manual though, and it is not explained how to do so after that. Without being certain, I would speculate that it refers to the process of training a Convolutional Neural Network (CNN) model. The documentation implies that you need to train your own CNN model if you do not have an existing one to import.

Here's a guide on CNN that includes training information.

https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/ A Beginner's Guide To Understanding Convolutional Neural Networks – Adit Deshpande – CS Undergrad at UCLA ('19)
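For what it's worth, here is a toy illustration of what that 'training' phase means. It has nothing to do with the NCS itself and isn't a CNN, just plain-Python gradient descent on a single linear neuron, so the train-then-deploy split in the workflow diagram is less mysterious:

```python
# Toy "training phase": gradient descent fits weights to example data.
# Purely illustrative; real NCS work would train a CNN in Caffe or
# TensorFlow and then compile it for the stick.

def train(samples, labels, lr=0.1, epochs=200):
    """Fit w, b so that w*x + b approximates the labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x + b
            err = pred - y
            w -= lr * err * x      # gradient step for the weight
            b -= lr * err          # gradient step for the bias
    return w, b

# Learn y = 2x + 1 from four points.
w, b = train([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(round(w, 2), round(b, 2))    # prints 2.0 1.0
```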

0 Kudos
RTasa
New Contributor I
930 Views

To me the stick is a WAY, WAY cheaper accelerator for TensorFlow or Caffe. Even using 4 of them simultaneously is cheaper than a single GPU card for accelerating inference computation.

That said, the jury is still out for me; I'm still poring through the new docs. I am not sure it's working as quickly as they claim or as I had hoped.

0 Kudos
PSnip
New Contributor II
930 Views

Hi,

I am not sure if this is the right place to discuss what the NCS can do or what its performance is. For now I will share some information here and then leave it at that; we could connect on the NCS developers forum for more. I too got the NCS only around 10 days back and spent a couple of days setting up the environment to see what it can do. My exposure to it is very limited, but I can share what I know.

(1) What is the NCS meant for: it is a hardware accelerator for executing DNNs on your platform. Theoretically, any platform running one of the supported Ubuntu versions can use the NCS.

(2) The NCS SDK contains tools which can compile a Caffe or TensorFlow model to run on the NCS. So if you have a Caffe model and weights, or a TensorFlow model and weights, the NCS tools will help you produce files which can be executed on the NCS. As you may already know, TensorFlow support was added only recently.

(3) Each NCS contains a Movidius Myriad 2 VPU, which has 12 SHAVE vector cores. You can use multiple NCS sticks, but each one needs to be used independently. There is some example code on the NCS forum to demonstrate this. Basically, if you want to execute a particular task on 4 NCS sticks, you create 4 graphs, one on each NCS, and distribute your processing. Suppose you want to run inference on a 20 FPS video: you could send each frame to a different NCS, with each one processing 5 FPS. But if the task is sequential in nature, then you won't be able to utilise it so easily, I think.
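To sketch that frame-distribution idea in plain Python (`distribute_frames` is my own stand-in, not an SDK call): a round-robin assignment hands frame i to stick i mod N, so a 20 FPS stream over 4 sticks leaves each stick 5 frames per second to process.

```python
# Round-robin distribution of video frames across N independent NCS sticks.
# Each queue would be fed to the graph allocated on that stick.

from itertools import cycle

def distribute_frames(frames, num_sticks):
    """Return a list of per-stick work queues, assigned round-robin."""
    queues = [[] for _ in range(num_sticks)]
    stick_ids = cycle(range(num_sticks))
    for frame in frames:
        queues[next(stick_ids)].append(frame)
    return queues

frames = list(range(20))          # one second of a 20 FPS stream
queues = distribute_frames(frames, 4)
print([len(q) for q in queues])   # prints [5, 5, 5, 5]
```

Note this only helps when frames can be processed independently; a sequential task (where frame i+1 depends on the result for frame i) gains nothing from the extra sticks, as said above.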

(4) You might have seen the examples which come with the NCS SDK. They are pretty self-explanatory. Apart from this, there is an app zoo on GitHub which has more examples. Though very limited, it is a good place to start.

https://github.com/movidius/ncappzoo GitHub - movidius/ncappzoo: Contains examples for the Movidius Neural Compute Stick.

(5) I have a gaming laptop from Dell with an Nvidia GPU. I ran the tiny-yolo example from the darknet website on it with GPU and CUDA support enabled. On my laptop webcam I got about 30 FPS.

I found a Tiny YOLO implementation for the NCS with which, on the same webcam, I got 4-5 FPS. I am not sure if it is the exact same network implementation, but still I would say that 4-5 FPS is not bad. Here is the link if you want to try:

https://github.com/gudovskiy/yoloNCS

(6) Also, here is a paper which gives some benchmarks on the NCS. When I ran the examples myself I got much better results, so it could be that the paper was done with an older hardware version, or maybe the compiler has become more efficient since.

0 Kudos
RTasa
New Contributor I
930 Views

I guess someone at Intel is listening. I was at the Movidius site today and they mentioned they have added ROS support and support for the Intel D-series cameras. Makes me wonder when the D435 will be released, or if it's real close now.

Either way, it was GREAT news that Intel is trying to merge things together. I am hoping they create a Myriad X stick soon, which would be much more powerful.

PSnip
New Contributor II
930 Views

Hi ChicagoBob,

Thanks for sharing the update.

I just checked webpage for the ROS NCS wrapper on git.

Looks really interesting. Will give it a shot some time in coming days.

Finally Intel seems to be providing everything developers want.

Unified software support for the NCS, RealSense, and ROS really makes sense if they are planning to target the robotics industry. It's a small effort for them, as they only need to do it once, and they already have software teams which are well familiar with the NCS and RealSense. If Intel doesn't do it, the effort will have to be replicated 1000 times, as each user will have to manage it on their own.

Really excited to see recent progress.

0 Kudos
RTasa
New Contributor I
930 Views

Sorry, this is off topic but interesting; not sure if you are following the Intel-AMD hook-up.

It's more than a little interesting, but there's no TensorFlow acceleration by AMD that I know of.

This makes me scratch my head.

https://www.pcworld.com/article/3235934/components-processors/intel-and-amd-ship-a-core-chip-with-radeon-graphics.html Intel and AMD team up: A future Core chip will have Radeon graphics inside | PCWorld

MartyG
Honored Contributor III
930 Views

I remembered you guys were interested in Movidius products, so I wanted to pass info about a new product on to those who took part in this discussion in the past.

https://venturebeat.com/2018/02/27/intel-makes-it-easier-to-bring-movidius-ai-accelerator-chip-into-production/ Intel makes it easier to bring Movidius AI accelerator chip into production | VentureBeat

PSnip
New Contributor II
930 Views

Thanks Marty,

Looks interesting. Not bad at USD 70. Support for the Pi 3 board is very welcome and will open up doors for other ARM-based boards. I have 2 NCS sticks but haven't got any further with them apart from running some example code. Once I figure out the real use of it in any of our products, the Aaeon board will be a good option.

0 Kudos
RTasa
New Contributor I
930 Views

Marty, you seem to be in the know. I watched a video on YouTube comparing the D435 and the D415, and it was quite disturbing. The D415 image looked a lot crisper, and the quality of the D435's distance coloring seems to suffer as well. Any ideas as to why?

https://www.youtube.com/watch?v=iY8ggcmcxqo Intel RealSense D435 vs D415 - YouTube

Also, I went to CES and was disappointed again by Intel. They have this cool new chip combo with a hard bus between an AMD GPU and their CPU. The results seemed awesome, but alas, NO mention from AMD about hardware-accelerating deep learning.

I have the NCS USB stick and it's just OK. It's not like I can attach it to a machine, run some TensorFlow code, and get speed like the version that is accelerated by Nvidia. Intel REALLY seems to be missing the boat.

Movidius was there with one tiny space, with a YOLO demo running that looked cool.

Mobileye had one of the most useless displays I have seen. From what I have seen, if I had to get something today in the AI/deep-learning robotics space, it would be Nvidia. They get it. They had another AI-in-the-car board announced at CES.

0 Kudos
MartyG
Honored Contributor III
930 Views

It's hard to determine a reason for the difference between the D435 and D415 images. There have been a few reports in this forum of worse D435 depth scanning results compared to previous RealSense cameras. In a couple of cases the users had fluorescent lights in the scanning location, which can disturb the image.

The D415 also has its RGB sensor integrated on its module board, whilst the D430 caseless module board lacks this sensor. So the RGB sensor would have to be attached another way for its inclusion in the cased D435 version of the camera.

0 Kudos