Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

[Error 25] Myriad Error: "Major or Minor Slices of MatMul are zero".

idata
Employee
796 Views
name: "j"
layer { name: "data" type: "Input" top: "data" input_param { shape { dim: 1 dim: 3 dim: 224 dim: 224 } } }
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 64 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 4 pad_w: 4 kernel_h: 7 kernel_w: 7 stride_h: 2 stride_w: 2 } }
layer { name: "relu2" type: "ReLU" bottom: "conv1" top: "conv1" }
layer { name: "pool3" type: "Pooling" bottom: "conv1" top: "pool3" pooling_param { pool: MAX kernel_h: 3 kernel_w: 3 stride_h: 3 stride_w: 3 pad_h: 0 pad_w: 0 } }
layer { name: "conv4" type: "Convolution" bottom: "pool3" top: "conv4" convolution_param { num_output: 128 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 1 pad_w: 1 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu5" type: "ReLU" bottom: "conv4" top: "conv4" }
layer { name: "conv6" type: "Convolution" bottom: "conv4" top: "conv6" convolution_param { num_output: 128 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu7" type: "ReLU" bottom: "conv6" top: "conv6" }
layer { name: "conv8" type: "Convolution" bottom: "conv6" top: "conv8" convolution_param { num_output: 128 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu9" type: "ReLU" bottom: "conv8" top: "conv8" }
layer { name: "conv10" type: "Convolution" bottom: "conv8" top: "conv10" convolution_param { num_output: 128 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu11" type: "ReLU" bottom: "conv10" top: "conv10" }
layer { name: "pool12" type: "Pooling" bottom: "conv10" top: "pool12" pooling_param { pool: MAX kernel_h: 2 kernel_w: 2 stride_h: 2 stride_w: 2 pad_h: 0 pad_w: 0 } }
layer { name: "conv13" type: "Convolution" bottom: "pool12" top: "conv13" convolution_param { num_output: 256 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 1 pad_w: 1 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu14" type: "ReLU" bottom: "conv13" top: "conv13" }
layer { name: "conv15" type: "Convolution" bottom: "conv13" top: "conv15" convolution_param { num_output: 256 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 1 pad_w: 1 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu16" type: "ReLU" bottom: "conv15" top: "conv15" }
layer { name: "conv17" type: "Convolution" bottom: "conv15" top: "conv17" convolution_param { num_output: 256 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu18" type: "ReLU" bottom: "conv17" top: "conv17" }
layer { name: "conv19" type: "Convolution" bottom: "conv17" top: "conv19" convolution_param { num_output: 256 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu20" type: "ReLU" bottom: "conv19" top: "conv19" }
layer { name: "pool21" type: "Pooling" bottom: "conv19" top: "pool21" pooling_param { pool: MAX kernel_h: 3 kernel_w: 3 stride_h: 3 stride_w: 3 pad_h: 0 pad_w: 0 } }
layer { name: "conv22" type: "Convolution" bottom: "pool21" top: "conv22" convolution_param { num_output: 2304 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 0 pad_w: 0 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu23" type: "ReLU" bottom: "conv22" top: "conv22" }
layer { name: "conv24" type: "Convolution" bottom: "conv22" top: "conv24" convolution_param { num_output: 256 weight_filler { type: "xavier" } bias_filler { type: "constant" } pad_h: 1 pad_w: 1 kernel_h: 2 kernel_w: 2 stride_h: 1 stride_w: 1 } }
layer { name: "relu25" type: "ReLU" bottom: "conv24" top: "conv24" }
layer { name: "fc26" type: "InnerProduct" bottom: "conv24" top: "fc26" inner_product_param { num_output: 4096 weight_filler { type: "xavier" } bias_filler { type: "constant" } } }
layer { name: "relu27" type: "ReLU" bottom: "fc26" top: "fc26" }
layer { name: "fc28" type: "InnerProduct" bottom: "fc26" top: "fc28" inner_product_param { num_output: 4096 weight_filler { type: "xavier" } bias_filler { type: "constant" } } }
layer { name: "relu29" type: "ReLU" bottom: "fc28" top: "fc28" }
layer { name: "output" type: "InnerProduct" bottom: "fc28" top: "output" inner_product_param { num_output: 1000 weight_filler { type: "xavier" } bias_filler { type: "constant" } } }
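For reference, a quick way to sanity-check a network like this is to trace the spatial dimensions through it. The sketch below is not part of the original post; it assumes standard Caffe output-size arithmetic (floor rounding for convolution, ceil rounding for pooling) applied to the square layers of the "j" network above, to check whether any feature map collapses to zero:

```python
import math

def conv_out(size, kernel, stride, pad):
    # Caffe convolution output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride, pad):
    # Caffe pooling output size uses ceil rounding instead of floor
    return math.ceil((size + 2 * pad - kernel) / stride) + 1

# (name, fn, kernel, stride, pad) taken from the prototxt above
layers = [
    ("conv1",  conv_out, 7, 2, 4),
    ("pool3",  pool_out, 3, 3, 0),
    ("conv4",  conv_out, 2, 1, 1),
    ("conv6",  conv_out, 2, 1, 0),
    ("conv8",  conv_out, 2, 1, 0),
    ("conv10", conv_out, 2, 1, 0),
    ("pool12", pool_out, 2, 2, 0),
    ("conv13", conv_out, 2, 1, 1),
    ("conv15", conv_out, 2, 1, 1),
    ("conv17", conv_out, 2, 1, 0),
    ("conv19", conv_out, 2, 1, 0),
    ("pool21", pool_out, 3, 3, 0),
    ("conv22", conv_out, 2, 1, 0),
    ("conv24", conv_out, 2, 1, 1),
]

size = 224
for name, fn, k, s, p in layers:
    size = fn(size, k, s, p)
    print(f"{name}: {size}x{size}")
```

Under these assumptions every layer's output stays positive (conv24 ends at 6x6), which suggests the error comes from the compiler's internal MatMul slicing rather than from a zero-sized feature map in the model itself.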
4 Replies
idata
Employee
509 Views

@csarron We are still investigating the root cause, but I have a workaround for you in the meantime. Here are the steps to create a conf file that should be placed in the same directory as your prototxt file.

 

1) Create a new text file named "j.conf".

2) Add the following lines to the "j.conf" text file:

conv15
im2col_v2
conv24
im2col_v2

3) Save the file and place it in the same directory as your "j.prototxt". You should be able to use this network as long as the conf file accompanies it.
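The steps above can be scripted. A minimal shell sketch (file name and layer/optimisation pairs taken from the steps above; run it from the directory containing "j.prototxt"):

```shell
#!/bin/sh
# Create j.conf next to j.prototxt. Each layer name is followed by the
# optimisation to force for that layer (im2col_v2 in this workaround).
cat > j.conf <<'EOF'
conv15
im2col_v2
conv24
im2col_v2
EOF
```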

idata
Employee
509 Views
@Tome_at_Intel Thank you for the reply. I tried the conf workaround as you described, but it still gives the same error. I also got the following log:

/usr/local/bin/ncsdk/Controllers/FileIO.py:52: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
"Consider reducing your data sizes for best performance\033[0m")
[Error 25] Myriad Error: "Major or Minor Slices of MatMul are zero".
mvNCProfile v02.00, Copyright @ Movidius Ltd 2016
0 0x80000000 Layer conv1 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer pool3 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv4 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv6 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv8 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv10 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer pool12 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv13 use the generic optimisations which is: 0x80000000
Spec opt found opt_conv_im2col_v2 1<< 2
Layer (a) conv15 use the optimisation mask which is: 0x4
0 0x80000000 Layer conv17 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv19 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer pool21 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer conv22 use the generic optimisations which is: 0x80000000
Spec opt found opt_conv_im2col_v2 1<< 2
4 0x80000004 Layer conv24 use the generic optimisations which is: 0x80000004
0 0x80000000 Layer fc26 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer fc28 use the generic optimisations which is: 0x80000000
0 0x80000000 Layer output use the generic optimisations which is: 0x80000000
USB: Transferring Data...
[Error 25] Myriad Error: "Major or Minor Slices of MatMul are zero".
idata
Employee
509 Views

@Tome_at_Intel Given the optimization configuration you mentioned, can you share more details on how to configure the optimization mask?

idata
Employee
509 Views

@csarron Make sure you are using the latest version of the SDK and please try this again. Thanks.
