Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Can I run more than one model with one stick?

idata
Employee

Hi Tom,

 

Thanks for your help. Our team has successfully run an SSD_mobilenet model on an embedded platform and obtained accurate detection results.

 

Now we want to run two or three models at the same time,

 

and I wonder whether we need to get two or three Movidius sticks,

 

or whether a single Movidius stick can manage to run three different models at the same time.

 

Thank you for your time; we look forward to your reply!

 

(Because of the wonderful Movidius stick and the helpful support,

 

we have recommended the Movidius stick to our colleagues and partners.)
5 Replies
idata
Employee

@zufeifei If you are using NCSDK version 2.xx, you can allocate different graph files on the same device. You will need to create input and output FIFOs, typically one pair for each of your networks. You can then queue up inferences, and the inference results will be placed in the output FIFOs. You won't be processing each one simultaneously, but you should be able to queue up inferences in a pipelined manner.

 

Here is the Python API FIFO information: https://movidius.github.io/ncsdk/ncapi/python_api_migration.html, and here is the C API FIFO information: https://movidius.github.io/ncsdk/ncapi/c_api_migration.html. Both have sample code if you need it, and you can visit the NCSDK 2 branch of the ncappzoo for more NCSDK 2 code samples: https://github.com/movidius/ncappzoo/tree/ncsdk2
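
For illustration, here is a minimal sketch of that pattern using the NCSDK 2 Python API. The graph file names, network names, and input shapes below are placeholders for your own compiled models, not code from this thread:

```python
import numpy
from mvnc import mvncapi

# Find and open a single NCS device.
device = mvncapi.Device(mvncapi.enumerate_devices()[0])
device.open()

# Load two compiled graph files onto the same stick, giving each network
# its own input/output FIFO pair (file names are placeholders).
with open('ssd_mobilenet.graph', 'rb') as f:
    ssd_buffer = f.read()
with open('googlenet.graph', 'rb') as f:
    gn_buffer = f.read()

ssd_graph = mvncapi.Graph('ssd_mobilenet')
ssd_in, ssd_out = ssd_graph.allocate_with_fifos(device, ssd_buffer)

gn_graph = mvncapi.Graph('googlenet')
gn_in, gn_out = gn_graph.allocate_with_fifos(device, gn_buffer)

# Queue one inference per network. Both networks share the stick, so the
# work is pipelined rather than truly simultaneous.
ssd_input = numpy.zeros((300, 300, 3), dtype=numpy.float32)  # replace with a preprocessed frame
gn_input = numpy.zeros((224, 224, 3), dtype=numpy.float32)   # replace with a preprocessed frame

ssd_graph.queue_inference_with_fifo_elem(ssd_in, ssd_out, ssd_input, None)
gn_graph.queue_inference_with_fifo_elem(gn_in, gn_out, gn_input, None)

ssd_result, _ = ssd_out.read_elem()
gn_result, _ = gn_out.read_elem()

# Clean up: destroy FIFOs and graphs before closing the device.
for fifo in (ssd_in, ssd_out, gn_in, gn_out):
    fifo.destroy()
ssd_graph.destroy()
gn_graph.destroy()
device.close()
device.destroy()
```

Each read_elem() call blocks until the corresponding result is available in that network's output FIFO, which is what gives the pipelined (rather than simultaneous) behavior on a single stick.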

idata
Employee

Hi @Tome_at_Intel. Is there an example showing the loading of multiple graphs on a single NCS?

idata
Employee

@karthik The birds example in the ncappzoo is a good example to review for using two models with one device. In this case the models being used are Tiny Yolo V1 and GoogLeNet.

 

The typical workflow is to open the device like you normally would, then load the graph files to the device and create an input FIFO and an output FIFO for each network. This can be seen in lines 449-458 of the birds app.

 

In line 497, the app queues up an inference for the Tiny Yolo model. The next line reads from the Tiny Yolo output FIFO to get the result of that inference.

 

get_googlenet_classifications() is called afterwards, and in lines 398-399 you can see the same process happen for the GoogLeNet model, using the GoogLeNet input and output FIFOs.

 

Also, remember the cleanup at the end.

 

Hope this helps.
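
As a rough sketch of that detect-then-classify flow (not the actual birds app code), the hypothetical helper below assumes the Tiny Yolo and GoogLeNet graphs and their FIFO pairs have already been allocated on one stick, as in the earlier sketch; crop_and_preprocess is a placeholder for the app's own post-processing that turns Tiny Yolo detections into preprocessed GoogLeNet input tensors:

```python
def detect_and_classify(frame_tensor, yolo_graph, yolo_in, yolo_out,
                        gn_graph, gn_in, gn_out, crop_and_preprocess):
    """Hypothetical helper: run one Tiny Yolo detection pass, then classify
    each detected region with GoogLeNet, all on the same stick via the
    per-network FIFO pairs."""
    # Detection: queue on the Tiny Yolo FIFO pair and read the result back.
    yolo_graph.queue_inference_with_fifo_elem(yolo_in, yolo_out, frame_tensor, None)
    detections, _ = yolo_out.read_elem()

    # Classification: turn the detections into cropped, preprocessed tensors
    # (placeholder callable) and run each one through GoogLeNet's FIFO pair.
    classifications = []
    for crop in crop_and_preprocess(detections):
        gn_graph.queue_inference_with_fifo_elem(gn_in, gn_out, crop, None)
        result, _ = gn_out.read_elem()
        classifications.append(result)

    return detections, classifications
```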

idata
Employee

Very nice and useful answer about how to use two models at the same time. Thank you, @Tome_at_Intel!

idata
Employee

Interesting and cost-saving.

 

With a single NCS stick, we can push multiple models onto the NCS and run them all in a pipelined fashion. :)
