Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Multiple output nodes for mvNCCompile and tf object detection

idata
Employee
1,060 Views

Hi,

 

I want to compile a trained TensorFlow model for object detection (based on tensorflow/models: models/research/object_detection). Such a model has multiple outputs: boxes, scores, classes, and number of detections.

 

Is it possible to pass multiple output nodes to the -on option of mvNCCompile?

 

Is there any advice on using TensorFlow object detection models?

 

Thank you
10 Replies
idata
Employee
727 Views

@ostfor As of NCSDK 1.12, the -on option for mvNCCompile only takes one output node. For TensorFlow object detection, currently there is only one network that I know of that works with the NCSDK: Tiny Yolo V2. You can generate a model for use with an NC device by following the steps at https://ncsforum.movidius.com/discussion/comment/2161/#Comment_2161.

idata
Employee
727 Views

Hi @Tome_at_Intel, I'm trying to do a similar thing. I have seven outputs from my model and I don't know how to convert my .pb file into a graph file for the Movidius neural compute stick.

 

If it's possible for Yolo, I can't understand why it isn't possible for any other detector; Yolo also has multiple outputs.

 

I've followed the link you provided, but there's no code there to generate Yolo's graph files.

 

Thank you.
idata
Employee
727 Views

@EscVM To compile an NCS-compatible graph file, you must have the NCSDK installed and use the mvNCCompile tool to compile a graph file for your model. For example: mvNCCompile my_model.pb -s 12. For more information you can visit https://movidius.github.io/ncsdk/tools/compile.html.

 

As for TensorFlow object detectors, there are still some TensorFlow operations that are not supported yet which are required by these object detector models.

idata
Employee
727 Views

Thanks @Tome_at_Intel for your reply.

 

I know that I have to use the mvNCCompile my_model.pb -s 12 command, but for my application the architecture has seven output nodes. As far as I know, mvNCCompile can't handle that. However, Yolo has several outputs too, so I'd like to know how it has been possible to generate the graph for the neural compute stick in that case.

idata
Employee
727 Views

@EscVM Although Yolo has multiple outputs like bounding box coordinates, scores, classes, etc., it shouldn't be an issue. The graph file should still generate, and when running an inference with your application, it will return an array with all of the information you need (number of objects detected, bounding box coordinates, scores, etc.).

idata
Employee
727 Views

Is there example code showing how the Yolo graph was generated using the NCSDK? Thanks

idata
Employee
727 Views

@EscVM Here is an example using Tiny Yolo v1 on Caffe. https://github.com/movidius/ncappzoo/blob/master/caffe/TinyYolo/run.py. The results with all of the output can be interpreted as follows in my post at https://ncsforum.movidius.com/discussion/comment/2172/#Comment_2172.
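If it helps, the result comes back from the NCS as one flat vector. Assuming the standard Tiny Yolo v1 layout (a 7x7 grid, 20 classes, 2 boxes per cell, for a 1470-element output), the slicing can be sketched in plain NumPy like this; this is a rough illustration, not the exact code from the example linked above:

```python
import numpy as np

# Tiny Yolo v1 layout (assumed): 7x7 grid, 20 classes, 2 boxes per cell
S, C, B = 7, 20, 2

# Stand-in for the flat result array returned by the NCS for one inference
output = np.random.rand(S * S * (C + B * 5)).astype(np.float32)

class_probs = output[:S * S * C].reshape(S, S, C)                 # per-cell class probabilities
confidences = output[S * S * C:S * S * (C + B)].reshape(S, S, B)  # per-box objectness scores
boxes = output[S * S * (C + B):].reshape(S, S, B, 4)              # x, y, w, h for each box
```

From there, the post linked above describes how to threshold and combine these pieces into final detections.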

idata
Employee
727 Views

I'm trying to use the AIY Vision Kit (VisionBonnet), which is designed by Google; the solution uses the Myriad 2.

 

The VisionBonnet seems to support TensorFlow SSD MobileNetV1 and can produce multiple outputs. (I already ran the demo, but I still have some issues with my own model.)

 

Maybe someone has some experience to share. @Tome_at_Intel
idata
Employee
727 Views

I have the same issue: I can't find the syntax to fill the -on argument with multiple outputs. I'm working with a Faster R-CNN model and it has 4 outputs. I need something like this: mvNCCompile -s 12 frozen_inference_graph.pb -in=image_tensor -on=[Layer1, Layer2, Layer3, Layer4] (this is just an example, it doesn't work!).

idata
Employee
727 Views

I was thinking of modifying the graph (.pb) file like this:

 

Collect all the outputs from all required nodes and concatenate them into a single tensor as the final output. This tensor could then be fed as the output layer to mvNCCompile.

 

not sure if it works.
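At the array level, the idea would be something like this NumPy sketch. The real graph surgery would use something like tf.concat on the flattened output tensors before freezing the model; all names and shapes here are made up for illustration:

```python
import numpy as np

# Made-up per-node outputs of a detector (shapes are illustrative only)
detection_boxes = np.zeros((100, 4), dtype=np.float32)
detection_scores = np.zeros((100,), dtype=np.float32)
detection_classes = np.zeros((100,), dtype=np.float32)
num_detections = np.array([100.0], dtype=np.float32)

# Flatten everything and concatenate into one tensor; remember the split
# points so the host application can cut the single output apart again
parts = [detection_boxes, detection_scores, detection_classes, num_detections]
merged = np.concatenate([p.ravel() for p in parts])
splits = list(np.cumsum([p.size for p in parts])[:-1])

# Host side: slice the one output array back into the original pieces
boxes_flat, scores, classes, n = np.split(merged, splits)
```

Whether the NCSDK parser would accept such a merged node is exactly what I'm unsure about.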

 

I wanted to check whether one of the nodes (out of detection_boxes, detection_scores, num_detections, detection_classes) compiles with mvNCCompile [TensorFlow MobileNet-SSD]:

 

mvNCCompile model.ckpt.meta -s 12 -in=image_tensor -on=num_detections -o output.graph

 

Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 259, in parse_tensor
    input_data = np.random.uniform(0, 1, shape)
  File "mtrand.pyx", line 1307, in mtrand.RandomState.uniform
  File "mtrand.pyx", line 242, in mtrand.cont2_array_sc
TypeError: 'NoneType' object cannot be interpreted as an integer

 

How do I solve this error?

 

The second-best alternative is to convert the TensorFlow model to Caffe and then pass it to mvNCCompile.

 

I don't know why there is such poor support for TensorFlow!

 

Anyway, was anybody able to compile TensorFlow MobileNet-SSD with the Movidius SDK?

 

I'm just asking because it's easier to **retrain** TensorFlow MobileNet-SSD to detect custom objects.
