Intel® Distribution of OpenVINO™ Toolkit

Could not build graph for squeezedet. Missing link: conv1_shadow

idata
Employee

I am trying to convert a SqueezeDet Caffe model to run on a Movidius Neural Compute Stick using the following command:

 

mvNCCompile squeezedet.prototxt -w squeezeDet.caffemodel -s 12 -o graph

 

The compiler reports the following warning and error:

 

[Warning: 37] Output layer's name (Slice) must match its top (pred_class_probs)

 

[Error 17] Toolkit Error: Internal Error: Could not build graph. Missing link: conv1_shadow

 

The contents of squeezedet.prototxt are as follows (only the first half, as the file is too long):

name: "SqueezeDet"
input: "data"
input_shape { dim: 1 dim: 3 dim: 384 dim: 1248 }
input: "conv1_shadow"
input_shape { dim: 1 dim: 64 dim: 192 dim: 624 }
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 64 weight_filler { type: "xavier" } pad: 2 kernel_size: 3 stride: 2 } }
layer { name: "conv1_crop" type: "Crop" bottom: "conv1" bottom: "conv1_shadow" top: "conv1_crop" crop_param { axis: 1 offset: 0 offset: 1 offset: 1 } }
layer { name: "con1_relu" type: "ReLU" bottom: "conv1_crop" top: "conv1_relu" }
layer { name: "pool1" type: "Pooling" bottom: "conv1_relu" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
# ------- Fire2 Block ------- #
# (B x C x 96 x 312)
layer { name: "fire2/squeeze1x1" type: "Convolution" bottom: "pool1" top: "fire2/squeeze1x1" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 16 weight_filler { type: "xavier" } kernel_size: 1 stride: 1 } }
layer { name: "fire2/squeeze1x1_relu" type: "ReLU" bottom: "fire2/squeeze1x1" top: "fire2/squeeze1x1" }
layer { name: "fire2/expand1x1" type: "Convolution" bottom: "fire2/squeeze1x1" top: "fire2/expand1x1" param { lr_mult: 0.1 decay_mult: 0.1 } convolution_param { num_output: 64 weight_filler { type: "xavier" } kernel_size: 1 stride: 1 } }
layer { name: "fire2/expand1x1_relu" type: "ReLU" bottom: "fire2/expand1x1" top: "fire2/expand1x1" }
layer { name: "fire2/expand3x3" type: "Convolution" bottom: "fire2/squeeze1x1" top: "fire2/expand3x3" convolution_param { num_output: 64 weight_filler { type: "xavier" } pad: 1 kernel_size: 3 stride: 1 } }
layer { name: "fire2/expand3x3_relu" type: "ReLU" bottom: "fire2/expand3x3" top: "fire2/expand3x3" }
layer { name: "fire2/concat" type: "Concat" bottom: "fire2/expand1x1" bottom: "fire2/expand3x3" top: "fire2/concat" }
# ------- Fire3 Block ------- #
layer { name: "fire3/squeeze1x1" type: "Convolution" bottom: "fire2/concat" top: "fire3/squeeze1x1" convolution_param { num_output: 16 weight_filler { type: "xavier" } kernel_size: 1 stride: 1 } }
layer { name: "fire3/squeeze1x1_relu" type: "ReLU" bottom: "fire3/squeeze1x1" top: "fire3/squeeze1x1" }
layer { name: "fire3/expand1x1" type: "Convolution" bottom: "fire3/squeeze1x1" top: "fire3/expand1x1" convolution_param { num_output: 64 weight_filler { type: "xavier" } kernel_size: 1 stride: 1 } }
layer { name: "fire3/expand1x1_relu" type: "ReLU" bottom: "fire3/expand1x1" top: "fire3/expand1x1" }
layer { name: "fire3/expand3x3" type: "Convolution" bottom: "fire3/squeeze1x1" top: "fire3/expand3x3" convolution_param { num_output: 64 weight_filler { type: "xavier" } pad: 1 kernel_size: 3 stride: 1 } }
layer { name: "fire3/expand3x3_relu" type: "ReLU" bottom: "fire3/expand3x3" top: "fire3/expand3x3" }
layer { name: "fire3/concat" type: "Concat" bottom: "fire3/expand1x1" bottom: "fire3/expand3x3" top: "fire3/concat" }
layer { name: "pool3" type: "Pooling" bottom: "fire3/concat" top: "pool3" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }

 

Does anybody know what is wrong? Any help would be appreciated.

idata
Employee

@gopinath It looks like this model uses multiple inputs. The current NCSDK release (2.05) does not support networks with multiple inputs, and unfortunately I can't provide a roadmap for when that feature might be added.
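
If you want to experiment in the meantime, one possible workaround (untested, and an assumption on my part rather than a supported path) is to remove the conv1_shadow dummy input together with the conv1_crop layer, so the deploy prototxt has a single input. With pad: 1, the 3x3 stride-2 conv1 already produces a 64 x 192 x 624 output, i.e. the same shape the Crop layer was enforcing, although the spatial alignment shifts by one pixel relative to pad: 2 plus crop, so accuracy should be re-validated against the original model. A minimal sketch of the top of the file (training-only fields such as lr_mult and weight_filler omitted, since they don't affect inference):

name: "SqueezeDet"
input: "data"
input_shape { dim: 1 dim: 3 dim: 384 dim: 1248 }
# conv1_shadow input and conv1_crop layer removed; pad: 1 makes conv1
# output 64 x 192 x 624 directly: floor((384 + 2*1 - 3)/2) + 1 = 192 and
# floor((1248 + 2*1 - 3)/2) + 1 = 624.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    pad: 1          # was pad: 2 followed by the Crop against conv1_shadow
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv1_relu"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1_relu"
}
# ...rest of the network unchanged, starting from pool1 (bottom: "conv1_relu")...

The remaining layers (pool1 onward) can stay as they are, since they only consume conv1_relu.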
