Intel® Distribution of OpenVINO™ Toolkit

Issue with LightenedCNN running on Myriad 2 - Output results are not matching GPU's results

idata
Employee

High-level summary: we are seeing different outputs for the same input image and the same network when running on a GPU compared to the Movidius stick (Myriad 2). The neural network definition comes from a GitHub project called "A Light CNN for Deep Face Representation with Noisy Labels". We would like to understand why there is a difference and how we can fix it so that the stick and the GPU return the same output values.

 

1) The initial step was to compile this neural network: https://github.com/AlfredXiangWu/face_verification_experiment/blob/master/proto/LightenedCNN_C_deploy.prototxt

 

2) The Movidius parser/compiler returned an error around the Slice layer. Modifications to the network definition were made per the post HERE

 

3) After modification the new prototxt compiles properly; see the new definition in LightenedCNN_C_deploy.prototxt.txt (attached to this post).

 

4) Finally, the neural network is fed a grayscale image and the results are written to movidius_op.txt (attached to this post).

 

5) When we compare the result from the GPU (same network definition, same input image) with the result from the stick, the values do not match; please compare gnu_op.txt (attached to this post). A small comparison script is sketched below.
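
 

This is a minimal sketch of how the mismatch could be quantified, assuming both attached files contain one floating-point value per line (we have not standardized the file format, so adjust the loading as needed):

import numpy as np

gpu = np.loadtxt('gnu_op.txt')       # GPU output (attached to this post)
ncs = np.loadtxt('movidius_op.txt')  # Myriad 2 output (attached to this post)

# Largest element-wise discrepancy and overall agreement between the two vectors
print('max abs difference:', np.max(np.abs(gpu - ncs)))
print('cosine similarity :', np.dot(gpu, ncs) / (np.linalg.norm(gpu) * np.linalg.norm(ncs)))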

 

Questions:

 

a) Would it be possible to confirm this type of CNN can actually run on the Movidius stick?

 

b) Is the Slice operation supported on the Movidius stick?

 

c) Is there a tool that can be used to emulate the Movidius stick behavior on a GPU?

 

d) We are concerned about the data types used to represent CNN parameters on the stick; we suspect we may be losing precision (data truncation) during inference. Please comment on this. A rough check is sketched below.
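
 

As a rough sanity check (this assumes the stick stores values in half precision, which we have not confirmed), one can estimate how much an FP16 round-trip alone would perturb the GPU output:

import numpy as np

gpu = np.loadtxt('gnu_op.txt')                                 # GPU output (attached)
fp16_round_trip = gpu.astype(np.float16).astype(np.float32)    # simulate FP16 storage
print('max abs error from FP16 rounding alone:', np.max(np.abs(gpu - fp16_round_trip)))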

 

//////////////////////////////////////////////////

 

Note: the input picture used for testing is attached to this post. The image undergoes two transformations (grayscale normalization and reshaping) before going through the network. The Python code is below:

 

import cv2
import numpy as np

# Load the attached test image as grayscale ('test_image.png' is a placeholder name)
img_gray = cv2.imread('test_image.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Normalize to roughly [-1, 1], then reshape to NCHW (batch=1, channel=1, 128x128)
im_data = (img_gray - 127.5) * 0.0078125
ip_data = np.reshape(im_data, (1, 1, 128, 128))

 


 

The trained model is available upon request (large file, 126 MB).

idata
Employee

@mickaeltoumi Your issue could be related to scaling and mean subtraction. Please visit https://movidius.github.io/ncsdk/configure_network.html for information regarding configuring your network for the NCS. Also, https://movidius.github.io/ncsdk/Caffe.html may help.
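
 

For reference, here is a minimal sketch (not your actual code) of how the preprocessed tensor could be fed to the stick with the NCSDK 1.x Python API. It assumes the graph was compiled with mvNCCompile, that preprocessing (mean subtraction and scaling) is done on the host before LoadTensor, and that the single-channel 128x128 array is accepted as-is; the file names 'graph' and 'test_image.png' are placeholders:

import cv2
import numpy as np
from mvnc import mvncapi as mvnc

# Same preprocessing as on the GPU side
img_gray = cv2.imread('test_image.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
im_data = (img_gray - 127.5) * 0.0078125

# Open the first NCS device and load the compiled graph file
device = mvnc.Device(mvnc.EnumerateDevices()[0])
device.OpenDevice()
with open('graph', 'rb') as f:
    graph = device.AllocateGraph(f.read())

# The Myriad 2 computes in half precision, so the input tensor is converted to float16
graph.LoadTensor(im_data.astype(np.float16), 'user object')
output, userobj = graph.GetResult()

graph.DeallocateGraph()
device.CloseDevice()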

idata
Employee

Hi Tome, thank you for the answer. In both cases (GPU and stick), the image is processed the same way.

 

Is it expected to have differences between the GPU and the stick if we use the exact same network prototxt file and the same input on both sides?
