Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

About the PReLU, BatchNorm, and Scale layers of deploy.prototxt (Caffe conversion to Movidius)

idata
Employee

I use the Caffe framework and run the mvNCProfile command below to convert a Caffe deploy.prototxt for Movidius:

 

mvNCProfile deploy.prototxt -s 12

 

I found that if a PReLU, BatchNorm, or Scale layer uses the same "bottom" and "top" name, the conversion produces a wrong result.

 

In contrast, a ReLU layer can use the same "bottom" and "top" name without problems.

  • Example 1. deploy.prototxt - use the same "bottom" and "top" name

 

name: "Net V1.03"
input: "data"
input_shape { dim: 1 dim: 3 dim: 200 dim: 200 }
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 6
    kernel_size: 5
    stride: 2
    pad: 2
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "relu_conv1"
  type: "PReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: true }
  include { phase: TEST }
}
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "conv1"
  top: "conv2"
  convolution_param {
    num_output: 12
    kernel_size: 3
    pad: 1
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "relu_conv2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool_final"
  type: "Pooling"
  bottom: "conv2"
  top: "pool_final"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}

 

 

=> The Movidius conversion result:

 

drive.google.com/file/d/1MmzeQfGwrbz_P3eB8CuJfiR1TuCGhLhV/view?usp=sharing

 

The layer connections are wrong: the relu_conv1 layer is left disconnected, outside the graph.

 

(The correct connectivity is conv1 -> relu_conv1 -> bn_conv1 -> scale_conv1 -> conv2 -> …)
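The in-place reuse that breaks the converter can also be detected mechanically. The sketch below is not part of the NCSDK; it is a rough helper that parses a multi-line deploy.prototxt with a regex (assuming one name/bottom/top field per line) and reports any blob written by more than one layer, which is exactly the pattern in Example 1:

```python
import re

def parse_layers(prototxt: str):
    """Collect (name, bottoms, tops) for each layer block in a
    multi-line Caffe deploy.prototxt (assumes one field per line)."""
    layers, cur = [], None
    for line in prototxt.splitlines():
        if re.match(r'\s*layer\s*\{', line):
            cur = {"name": None, "bottom": [], "top": []}
            layers.append(cur)
            continue
        m = re.match(r'\s*(name|bottom|top):\s*"([^"]+)"', line)
        if m and cur is not None:
            key, val = m.groups()
            if key == "name":
                if cur["name"] is None:   # keep only the layer's own name
                    cur["name"] = val
            else:
                cur[key].append(val)
    return layers

def inplace_writers(layers):
    """Map each blob to the layers that write it; more than one writer
    means the blob name is being reused in place."""
    writers = {}
    for layer in layers:
        for top in layer["top"]:
            writers.setdefault(top, []).append(layer["name"])
    return {blob: names for blob, names in writers.items() if len(names) > 1}
```

Running it over Example 1 would flag "conv1" as written by conv1, relu_conv1, bn_conv1, and scale_conv1, while Example 2 reports nothing.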

 

 

  • Example 2. deploy.prototxt - use different "bottom" and "top" names

 

 

name: "Net V1.03"
input: "data"
input_shape { dim: 1 dim: 3 dim: 200 dim: 200 }
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 6
    kernel_size: 5
    stride: 2
    pad: 2
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "relu_conv1"
  type: "PReLU"
  bottom: "conv1"
  top: "relu_conv1"
}
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "relu_conv1"
  top: "bn_conv1"
  batch_norm_param { use_global_stats: true }
  include { phase: TEST }
}
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "bn_conv1"
  top: "scale_conv1"
  scale_param { bias_term: true }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "scale_conv1"
  top: "conv2"
  convolution_param {
    num_output: 12
    kernel_size: 3
    pad: 1
    weight_filler { type: "xavier" }
  }
}
layer {
  name: "relu_conv2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool_final"
  type: "Pooling"
  bottom: "conv2"
  top: "pool_final"
  pooling_param {
    pool: AVE
    global_pooling: true
  }
}

 

 

=> The Movidius conversion result:

 

https://drive.google.com/file/d/1FBsXzyOasqPvtL9q6Oifxsi_bblQU-p8/view?usp=sharing

 

When I make the "top" and "bottom" names different, the layer connections look correct.

 

 

Is making the "top" and "bottom" names different the right way to handle this situation? Or does anybody have another suggestion?
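If renaming by hand becomes tedious on larger networks, the rewrite from Example 1 to Example 2 can be automated. The sketch below is not an NCSDK feature; it assumes a multi-line prototxt with one field per line and, as in standard Caffe files, that a layer's bottom: lines appear before its top: line. It renames each in-place top to its layer's name and rewrites later bottoms to match:

```python
import re

def deinplace(prototxt: str) -> str:
    """Rename in-place tops (top == bottom within a layer) to the layer
    name, and rewrite later bottoms to the renamed blob."""
    rename = {}          # original blob name -> current renamed name
    layer_name = None
    layer_bottoms = []
    out = []
    for line in prototxt.splitlines():
        m = re.match(r'(\s*)name:\s*"([^"]+)"', line)
        if m:
            layer_name = m.group(2)
            layer_bottoms = []
            out.append(line)
            continue
        m = re.match(r'(\s*)bottom:\s*"([^"]+)"', line)
        if m:
            indent, blob = m.groups()
            layer_bottoms.append(blob)
            out.append(f'{indent}bottom: "{rename.get(blob, blob)}"')
            continue
        m = re.match(r'(\s*)top:\s*"([^"]+)"', line)
        if m:
            indent, blob = m.groups()
            if blob in layer_bottoms:   # in-place layer: give the top a unique name
                rename[blob] = layer_name
                out.append(f'{indent}top: "{layer_name}"')
            else:
                rename.pop(blob, None)  # fresh blob; drop any stale rename
                out.append(line)
            continue
        out.append(line)
    return '\n'.join(out)
```

Applied to Example 1, this produces exactly the naming of Example 2 (top: "relu_conv1", bottom: "relu_conv1" on bn_conv1, and so on), since Example 2 already uses each layer's name as its top.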

idata
Employee

@miily This seems to be the way to do it from what I've been seeing.
