Hello, I am trying to train the facenet model with my own dataset and compile the trained model to generate a graph file.
Before doing this, I tried to compile the pre-trained model downloaded by "get_zipped_facenet_model.sh". I can generate a graph file
from the downloaded pre-trained model and run run.py in the facenet app of ncappzoo normally. When I compile the facenet model trained on
my own data with the same parameters, I get "[Error 13] Toolkit Error: Provided OutputNode/InputNode name does not exist or does
not match with one contained in model file Provided: output:0". How can I resolve this problem? What parameters do I need when training the
facenet model with my own dataset? How can I check whether the OutputNode/InputNode name is correct? My trained model consists of the
files "XXX.meta", "XXX.index", and "XXX.data-00000-of-00001", the same as the downloaded model.
Any suggestions?
Thanks.
- Tags:
- Keras
- Raspberry Pi
@Crawford Your trained model probably does not have the needed input and output nodes. If you look at the convert_facenet.py script, you can see that they are added to the graph and the graph is resaved. The process of converting the TensorFlow model to a Movidius graph file is described here. You can inspect the structure of the graph using TensorBoard's graph visualization.
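To make the "add nodes and resave" idea concrete, here is a minimal sketch (my own illustration, assuming TF 1.x-style APIs via tf.compat.v1; the names "input"/"output" and the tiny graph body are placeholders, not facenet's actual ones):

```python
# Minimal sketch: wrap the graph's final tensor in an Identity op with a
# known, stable name, resave the graph, and list every node name so you
# can verify what mvNCCompile's -in/-on flags should be.
# "input"/"output" are illustrative names, not facenet's actual ones.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[1, 160, 160, 3], name="input")
    y = x * 0.5                           # stand-in for the real network body
    out = tf.identity(y, name="output")   # explicit, named output node

# Resave the modified graph, then list node names:
tf.train.write_graph(graph.as_graph_def(), "/tmp", "resaved.pb", as_text=False)
names = [n.name for n in graph.as_graph_def().node]
print(names)
```

If your error mentions a node like "output:0", the name before the colon must appear in this list.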
Thanks for the replies. I followed the compilation guidance in the link, but I do not know how to generate the "inception_v3.pb" in the example.
How can I generate the inference graph? Is there a script in facenet for this? If I need to write a script myself, where can I find
some examples?
Thanks.
@Crawford
If you want to easily check the INPUT and OUTPUT layers, you can use the following script, for example.
https://github.com/PINTO0309/MobileNet-SSD-RealSense/blob/master/tfconverter.py
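In the same spirit (this is a sketch of the idea, not the linked script itself), a few lines are enough to dump every node name from a frozen .pb; the toy graph and the /tmp path are placeholders for your own model file:

```python
# Sketch: list every node name contained in a frozen GraphDef (.pb),
# so you can find the real input/output node names for mvNCCompile.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

def list_nodes(pb_path):
    """Return every node name in a frozen GraphDef (.pb)."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return [n.name for n in graph_def.node]

# Demo: serialize a toy graph, then inspect it as if it were your model.
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 4], name="input_1")
    tf.identity(x * 2.0, name="embeddings")
tf.train.write_graph(g.as_graph_def(), "/tmp", "toy.pb", as_text=False)

for name in list_nodes("/tmp/toy.pb"):
    print(name)
```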
Thanks for your suggestion, PINTO. I have already converted the pb file to pbtxt and found the output node, "embeddings".
I used the following command to generate the NCS graph file:
mvNCCompile facenet.pb -in=batch_size -on=embeddings -o facenet.graph
Now Error 13 does not appear anymore, but I got another error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'phase_train' with dtype bool
[[Node: phase_train = Placeholder[dtype=DT_BOOL, shape=, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
I tried modifying the shape of 'phase_train', but it did not work.
Can anyone tell me how to fix this error?
Thanks again.
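One common fix for this kind of error (an assumption on my part, not verified against facenet itself) is to re-import the graph with the 'phase_train' placeholder hard-wired to a constant False via tf.import_graph_def's input_map, so nothing ever needs to feed it. A minimal sketch with a toy stand-in graph (the names mirror those in the error message):

```python
# Sketch: replace a boolean 'phase_train' placeholder with a constant
# False by re-importing the GraphDef with an input_map. Toy graph only;
# node names mimic facenet's but the model body is a stand-in.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Toy stand-in for the trained graph: the output depends on phase_train.
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 4], name="input")
    phase_train = tf.placeholder(tf.bool, name="phase_train")
    y = tf.cond(phase_train, lambda: x * 2.0, lambda: x + 1.0)
    tf.identity(y, name="embeddings")

# Re-import, mapping the placeholder to a constant False:
fixed = tf.Graph()
with fixed.as_default():
    const_false = tf.constant(False, dtype=tf.bool, name="phase_train_const")
    tf.import_graph_def(g.as_graph_def(),
                        input_map={"phase_train": const_false},
                        name="")

# phase_train no longer needs to be fed:
with tf.Session(graph=fixed) as sess:
    out = sess.run("embeddings:0", feed_dict={"input:0": [[1., 2., 3., 4.]]})
    print(out)
```

After this remapping you would freeze the fixed graph to a .pb and compile that.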
Any other suggestions? I'm in the same situation: I fine-tuned yolov3 in Keras on my own dataset and got the .h5 and .json files of the trained model. I tested the model on my laptop, so everything works as intended. I just need to put the model on the Raspberry Pi 3.
I configured the Raspberry Pi with the ncapi framework and ran the live-object-detector demo; everything is OK with that too.
Now I am trying to run my own yolov3 model on the Movidius. I converted the Keras .h5 output to a .pb file, but when I run "mvNCCompile -s 12 trainned.pb -in=input_1 -on=out_1/out_2/out_3" on Ubuntu 16.04 LTS with 8 GB of RAM, I get Error 13 saying the outputs do not exist or do not match.
Any help with this?
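One thing worth checking before compiling (a sketch, assuming TF 1.x-style APIs; the toy graph and the names "input_1"/"out_1" are borrowed from the command above, not from the real yolov3 model): freeze the variables into constants and confirm the exact output op name that survives freezing, since that is the name -on must match.

```python
# Sketch: freeze variables into constants and list the node names that
# remain, to confirm the exact -on name. The graph is a toy stand-in
# for the Keras-exported model.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 8], name="input_1")
    w = tf.Variable(tf.ones([8, 4]), name="w", use_resource=False)
    y = tf.identity(tf.matmul(x, w), name="out_1")

with tf.Session(graph=g) as sess:
    sess.run(tf.global_variables_initializer())
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ["out_1"])

# After freezing, the variable is baked in as a Const and "out_1" is
# the name to hand to -on.
names = [n.name for n in frozen.node]
print(names)
```

Note that Keras often appends op suffixes (e.g. a layer named "out_1" may produce an op like "out_1/BiasAdd"), so listing the frozen graph's names rather than guessing from layer names is the safer route.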
@Crawford Looks like you still have some training layers in the model. Can you try using TensorFlow's create_inference_graph function to remove any training-related layers?
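As one way to sketch this step (using graph_util.remove_training_nodes as a stand-in; it may not be the exact function meant above), training-only passthrough ops can be spliced out while protecting the real output node:

```python
# Sketch: strip training-only passthrough ops (Identity/CheckNumerics)
# from a GraphDef before compiling, protecting the named output node.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 4], name="input")
    y = tf.identity(x * 2.0)              # unnamed passthrough, removable
    out = tf.identity(y, name="output")   # the real output, must survive

pruned = tf.graph_util.remove_training_nodes(
    g.as_graph_def(), protected_nodes=["output"])
names = [n.name for n in pruned.node]
print(names)
```

Without protected_nodes, the "output" Identity itself could be spliced out, which would reintroduce the name-mismatch Error 13.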
@efmoran
hey, I'm also working on yolov3 now, using tf.slim. I got the same Error 13 before, simply because I made a mistake with the name scope. For example, my output node is in the name scope "yolo", so it's wrong to run "mvNCCheck xxx.meta -on output"; instead the command should be "mvNCCheck xxx.meta -on yolo/output".
So I think you should check your name scope and the name of your output node. Good luck!
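The scoping effect is easy to see in a tiny example (names here are illustrative): an op created inside tf.name_scope("yolo") is registered as "yolo/output", not "output".

```python
# Sketch: ops created inside a name scope get a prefixed name, which is
# the full name mvNCCheck/mvNCCompile must be given.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 4], name="input")
    with tf.name_scope("yolo"):
        tf.identity(x, name="output")   # registered as "yolo/output"

names = [n.name for n in g.as_graph_def().node]
print(names)
```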
@Tome_at_Intel
Hi, please, I just want to know whether error 5 (Toolkit Error: Stage Details Not Supported: ResizeNearestNeighbor) means that the ncapi doesn't support it for now?
Could you please give me some advice on doing upsampling on the Movidius?
@FREEMAN123 Yes, that's what it means. For TensorFlow there isn't any support for upsampling/deconvolution or transposed convolution at the moment. For Caffe, you can use the deconvolution layer.
@Tome_at_Intel Thank you! It seems that Caffe is the only possible solution for now.
I tried the solutions in https://ncsforum.movidius.com/discussion/comment/3039/#Comment_3039 and got the same error as @tripleplay369 and @huchhong did one year ago. I'm using NCAPI2, but the concat issue is still there. Is there any recent release that has solved this problem?