Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision on Intel® platforms.

Failure to compile a Caffe model

idata
Employee
I am trying to compile a Caffe model and get the following error: "Index out of range".

parallels@ubuntu:~/movidius/ncsdk/examples/caffe/NetC$ mvNCCompile -w best_model.caffemodel -in input -s 12 train.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/scipy/lib/decorator.py:219: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  first = inspect.getargspec(caller)[0][0]  # first arg
/usr/local/lib/python3.5/dist-packages/scipy/optimize/nonlin.py:1498: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  args, varargs, varkw, defaults = inspect.getargspec(jac.__init__)
/usr/local/lib/python3.5/dist-packages/scipy/stats/_distn_infrastructure.py:611: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  sign = inspect.getargspec(self._stats)
/usr/local/lib/python3.5/dist-packages/scipy/stats/_distn_infrastructure.py:648: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  shapes_args = inspect.getargspec(meth)
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 101, in create_graph
    net = parse_caffe(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/CaffeParser.py", line 477, in parse_caffe
    input_bottom = net.bottom_names[inputNodeName][0]
IndexError: list index out of range

What is the reason for the failure? I am using version 1.09.00.04. In my model I have layers that are interchangeable between TRAIN and TEST modes.

======== prototxt file ========

layer { name: "input" type: "Input" top: "data" include { phase: TEST } input_param { shape { dim: 1 dim: 3 dim: 24 dim: 24 } } }
layer { name: "data" type: "ImageData" top: "data" top: "label" include { phase: TRAIN } image_data_param { source: "/home/user/WorkProjects/TrafficSign/Classifier/Models/NetC/train_image_data_path.txt" batch_size: 512 shuffle: true new_height: 0 new_width: 0 is_color: true } }
layer { name: "Convolution1" type: "Convolution" bottom: "data" top: "Convolution1" convolution_param { num_output: 32 pad: 2 kernel_size: 5 stride: 2 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm1" type: "BatchNorm" bottom: "Convolution1" top: "Convolution1" }
layer { name: "Scale1" type: "Scale" bottom: "Convolution1" top: "Convolution1" scale_param { bias_term: true } }
layer { name: "block1" type: "ReLU" bottom: "Convolution1" top: "Convolution1" }
layer { name: "pool1" type: "Pooling" bottom: "Convolution1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "Convolution2" type: "Convolution" bottom: "pool1" top: "Convolution2" convolution_param { num_output: 64 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm2" type: "BatchNorm" bottom: "Convolution2" top: "BatchNorm2" }
layer { name: "Scale2" type: "Scale" bottom: "BatchNorm2" top: "BatchNorm2" scale_param { bias_term: true } }
layer { name: "block2" type: "ReLU" bottom: "BatchNorm2" top: "BatchNorm2" }
layer { name: "pool2" type: "Pooling" bottom: "BatchNorm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "Convolution3" type: "Convolution" bottom: "pool2" top: "Convolution3" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm3" type: "BatchNorm" bottom: "Convolution3" top: "BatchNorm3" }
layer { name: "Scale3" type: "Scale" bottom: "BatchNorm3" top: "BatchNorm3" scale_param { bias_term: true } }
layer { name: "ReLU1" type: "ReLU" bottom: "BatchNorm3" top: "BatchNorm3" }
layer { name: "Convolution4" type: "Convolution" bottom: "BatchNorm3" top: "Convolution4" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm4" type: "BatchNorm" bottom: "Convolution4" top: "BatchNorm4" }
layer { name: "Scale4" type: "Scale" bottom: "BatchNorm4" top: "BatchNorm4" scale_param { bias_term: true } }
layer { name: "ReLU2" type: "ReLU" bottom: "BatchNorm4" top: "BatchNorm4" }
layer { name: "Convolution5" type: "Convolution" bottom: "BatchNorm4" top: "Convolution5" convolution_param { num_output: 128 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm5" type: "BatchNorm" bottom: "Convolution5" top: "BatchNorm5" }
layer { name: "Scale5" type: "Scale" bottom: "BatchNorm5" top: "BatchNorm5" scale_param { bias_term: true } }
layer { name: "block3" type: "ReLU" bottom: "BatchNorm5" top: "BatchNorm5" }
layer { name: "global_pool" type: "Pooling" bottom: "BatchNorm5" top: "global_pool" pooling_param { pool: AVE global_pooling: true } }
layer { name: "Convolution6" type: "Convolution" bottom: "global_pool" top: "Convolution6" convolution_param { num_output: 1024 kernel_size: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm6" type: "BatchNorm" bottom: "Convolution6" top: "BatchNorm6" }
layer { name: "Scale6" type: "Scale" bottom: "BatchNorm6" top: "BatchNorm6" scale_param { bias_term: true } }
layer { name: "high_dim" type: "ReLU" bottom: "BatchNorm6" top: "BatchNorm6" }
layer { name: "dropout" type: "Dropout" bottom: "BatchNorm6" top: "BatchNorm6" dropout_param { dropout_ratio: 0.5 } }
layer { name: "Convolution7" type: "Convolution" bottom: "BatchNorm6" top: "Convolution7" convolution_param { num_output: 132 kernel_size: 1 weight_filler { type: "gaussian" std: 0.00999999977648 } } }
layer { name: "BatchNorm7" type: "BatchNorm" bottom: "Convolution7" top: "BatchNorm7" }
layer { name: "Scale7" type: "Scale" bottom: "BatchNorm7" top: "BatchNorm7" scale_param { bias_term: true } }
layer { name: "score" type: "ReLU" bottom: "BatchNorm7" top: "BatchNorm7" }
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "BatchNorm7" bottom: "label" top: "loss" loss_weight: 1.0 include { phase: TRAIN } }
layer { name: "accuracy" type: "Accuracy" bottom: "BatchNorm7" bottom: "label" top: "accuracy" top: "per_class_accuracy" include { phase: TRAIN } }
layer { name: "prob" type: "Softmax" bottom: "BatchNorm7" top: "prob" include { phase: TEST } }
layer { name: "argmax" type: "ArgMax" bottom: "prob" top: "argmax" include { phase: TEST } argmax_param { axis: 1 } }

===========================

Thanks,
Yair
6 Replies
idata
Employee

@yairh The NCS is meant to be used with models that are ready to deploy, so all layers related to training and testing, including the data layer, must be removed. Layers that expect labeled data are no longer needed, because the incoming data will not have labels. The ArgMax layer should also be removed from the prototxt file, since ArgMax is usually computed on the host CPU; the rest of the network can then run on the NCS. Additionally, the deployment prototxt file should have this declaration at the top of the file:

 

name: "user_network"

 

input: "data"

 

input_shape {

 

dim: 1

 

dim: 3

 

dim: 24

 

dim: 24

 

}
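
For reference, here is a sketch of how a trimmed deploy prototxt for your network could look. It is only an illustration based on the prototxt you posted, not a tested file: the Input/ImageData layers are replaced by the declaration above, the TRAIN-phase loss and accuracy layers and the TEST-phase ArgMax are dropped, and the Softmax "prob" layer becomes the output. Weight fillers are omitted since they are not needed at inference time.

name: "user_network"
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 24
  dim: 24
}
layer {
  name: "Convolution1"
  type: "Convolution"
  bottom: "data"
  top: "Convolution1"
  convolution_param { num_output: 32 pad: 2 kernel_size: 5 stride: 2 }
}
# BatchNorm1/Scale1/block1 through Scale7/score stay exactly as in the training prototxt.
# Drop the TRAIN-phase "loss" and "accuracy" layers and the "argmax" layer; keep "prob"
# (without its include block) as the single output and compute the argmax on the host.
layer {
  name: "prob"
  type: "Softmax"
  bottom: "BatchNorm7"
  top: "prob"
}

You would then compile against this deploy file instead of train.prototxt, for example with something like "mvNCCompile deploy.prototxt -w best_model.caffemodel -s 12" (where deploy.prototxt is whatever you name the trimmed file).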

 

Let me know if this helps!

idata
Employee

Thanks, it helped :-)

idata
Employee

Hi, I got the same error when trying to compile a simple MNIST model.

 

Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 101, in create_graph
    net = parse_caffe(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/CaffeParser.py", line 328, in parse_caffe
    input_bottom = net.bottom_names[inputNodeName][0]
IndexError: list index out of range

 

====== prototxt file ===========

name: "user_network"
input: "data"
input_shape {
  dim: 1
  dim: 1
  dim: 28
  dim: 28
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}

========= Compilation ==========

mvNCCompile mymodel.prototxt -w mymodel.caffemodel -s 12 -in input -on ip2 -is 28 28 -o my_net.graph

=============================

I have modified the prototxt file for the Movidius stick.

Thanks,
Sahad
idata
Employee

Hi, I was able to compile by changing the input node in the compilation command to "conv1" instead of "input".

 

=========================

mvNCCompile ./mymodel.prototxt -w ./mymodel.caffemodel -s 12 -in conv1 -on ip2 -is 28 28 -o ./my_net.graph

idata
Employee

Unfortunately, it gives inconsistent results. I think the input node for compilation should be "input", but there seems to be a bug when "input" is given as the input node.

idata
Employee

Sorry, my bad... The command

mvNCCompile ./mymodel.prototxt -w ./mymodel.caffemodel -s 12 -in conv1 -on ip2 -is 28 28 -o ./my_net.graph

works fine.

I ran a pre-trained 19-layer VGG model to classify among the 1000 ImageNet image classes, with an inference time of almost 0.98 seconds.