Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Tensorflow conversion toolkit error - output node not found

idata
Employee
1,619 Views

Attached is the TensorBoard graph from a CNN model I have been working on. I'm using the command below to convert it to the NCS graph format:

 

 

mvNCCompile v5_detector_movidius.meta -w v5_detector_movidius -s12 -in input_map -on detection_output_1 -o v5_detector_movdius.graph

 

 

I get the following error:

 

 

mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py:766: DeprecationWarning: builtin type EagerTensor has no __module__ attribute
  EagerTensor = c_api.TFE_Py_InitEagerTensor(_EagerTensorBase)
/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_inspect.py:45: DeprecationWarning: inspect.getargspec() is deprecated, use inspect.signature() instead
  if d.decorator_argspec is not None), _inspect.getargspec(target))
[Error 13] Toolkit Error: Provided OutputNode/InputNode name does not exist or does not match with one contained in model file Provided: detection_output_1:0

 

 

From the TensorBoard output, the output name looks correct, and I'm not getting an error on the input name ("input_map"), so the model appears to be saved correctly, but I must be doing something wrong. Any help would be appreciated.

0 Kudos
7 Replies
idata
Employee
1,135 Views

@bschulz Hi, I'd like to help you investigate this issue further. Can you provide a link to your model? Thanks.

0 Kudos
idata
Employee
1,135 Views

Thanks! The model was generated in Keras/TensorFlow. Below is the code that loads the model, removes the training parts, and saves the cleaned .meta, .index, and .data files:

 

import numpy as np
import tensorflow as tf
from keras.models import Sequential, load_model, Model
from keras.layers import Dropout, Flatten, Activation, Dense, Input, Embedding, LSTM, concatenate
from keras.layers.convolutional import Cropping2D, SeparableConv2D, Conv2D, MaxPooling2D
from keras.layers.convolutional import Convolution2D, UpSampling2D, ZeroPadding2D
from keras.layers.normalization import BatchNormalization
from keras.activations import softmax
from keras.layers.core import Layer, Dense, Dropout, Activation, Flatten, Reshape, Permute
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.utils import np_utils, plot_model
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.callbacks import TensorBoard, ModelCheckpoint
from keras import backend as K
K.set_image_dim_ordering('tf')
from PIL import Image
import glob
import time
import cv2
import os
from keras.engine.topology import Layer as Layer_inh  # <-- for custom layers
from keras.layers.pooling import _Pooling2D
from scipy.io import loadmat

input_feature_map_size = 5

#####################################################
# Define network
#####################################################
input_map = Input(shape=(None, None, input_feature_map_size), dtype='float32', name='input_map')

enc = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2_1')(input_map)
enc = BatchNormalization(name='bn_1')(enc)
enc = MaxPooling2D((2, 2), padding='same', name='mp_1')(enc)  # output is 32x32

enc = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2_2')(enc)
enc = BatchNormalization(name='bn_2')(enc)
enc = MaxPooling2D((2, 2), padding='same', name='mp_2')(enc)  # output is 16x16

enc = Conv2D(32, (3, 3), activation='relu', padding='same', name='conv2_3')(enc)
enc = BatchNormalization(name='bn_3')(enc)
encoded = MaxPooling2D((2, 2), padding='same', name='mp_3')(enc)  # output is 8x8

dec = Conv2D(32, (3, 3), activation='relu', padding='same', name='conv2_4')(encoded)
dec = BatchNormalization(name='bn_4')(dec)
dec = UpSampling2D((2, 2), name='us_4')(dec)  # output is 16x16

dec = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2_5')(dec)
dec = BatchNormalization(name='bn_5')(dec)
dec = UpSampling2D((2, 2), name='us_5')(dec)  # output is 32x32

dec = Conv2D(16, (3, 3), activation='relu', padding='same', name='conv2_6')(dec)
dec = BatchNormalization(name='bn_6')(dec)
dec = UpSampling2D((2, 2), name='us_6')(dec)  # output is 64x64

decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same', name='detection_output')(dec)

autoencoder = Model(input_map, decoded)
autoencoder.load_weights('contextual_detector_5_031618a.hdf5')

# output tensorflow model for NCS conversion
saver = tf.train.Saver()
sess = K.get_session()
saver.save(sess, "./models/v5_detector_movidius")
writer = tf.summary.FileWriter('./detector_graph', sess.graph)
0 Kudos
idata
Employee
1,135 Views
0 Kudos
idata
Employee
1,135 Views

@bschulz You can try using detection_output/Sigmoid as the output node. However, this gives me an error: TypeError: 'NoneType' object cannot be interpreted as an integer. It looks like you are using None for the input shape above. Try changing None to a constant, e.g. 1. Let me know if this works.
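For example, something along these lines in the model definition (the 64x64 size here is purely illustrative; use whatever resolution you actually plan to run), and then point mvNCCompile at the new output node name:

# Illustrative only: replace the dynamic (None, None, ...) input with a fixed spatial size
input_map = Input(shape=(64, 64, input_feature_map_size), dtype='float32', name='input_map')

mvNCCompile v5_detector_movidius.meta -w v5_detector_movidius -s12 -in input_map -on detection_output/Sigmoid -o v5_detector_movidius.graph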

0 Kudos
idata
Employee
1,135 Views

Thanks. "None" is used to allow the network to take in arbitrary image sizes. If I change that to a constant, will nvNCCompile replace it with a Movidius construct to allow arbitrary input sizes? I noticed I can run the apps like TinyYOLO from the app zoo on any size images and I need to be able to do that in my application. Also, for next time, how would I know to use detection_output/Sigmoid instead of detection_output as the output node? Can I read that from the model and if so, how? Otherwise, is there another place to find the label?

0 Kudos
idata
Employee
1,135 Views

@bschulz There are a couple of ways to get the output node name in a TensorFlow model. If you have a pretrained frozen model, it is usually easier to find the input and output node names.

 

If you have the .meta, .index, and weights files, one method you can use is TensorBoard. TensorBoard creates a visual representation of your model, and you can read the output node name directly from that graph. TensorBoard information can be found at https://www.tensorflow.org/programmers_guide/graph_viz. I used this method to examine your model.
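For example, with the FileWriter call from your script above, launching TensorBoard against that log directory should bring up the graph view (directory name taken from your earlier post):

tensorboard --logdir ./detector_graph

Then open http://localhost:6006 in a browser and look at the Graphs tab.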

 

If you have frozen your model and have the .pb file, you can use TensorFlow's summarize_graph tool, which will attempt to guess your input and output node names. This method requires installing Bazel. More information can be found at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms.
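Roughly, from a TensorFlow source checkout it looks like this (the .pb name below is just a placeholder for your frozen model):

bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=your_frozen_model.pb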

 

Yet another method is to read in the frozen model with gfile and print out the node names and node ops. Example below:

 

import tensorflow as tf
from tensorflow.python.platform import gfile

filename = "yourfilehere"  # path to your frozen .pb file
node_ops = []
with tf.gfile.GFile(filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    for node in graph_def.node:
        print(str(node.name) + " , " + str(node.op))
        # you can also use the code below instead of the print statement above
        # if node.op not in node_ops:
        #     node_ops.append(node.op)
        #     print(node.op)

 

Hope this helps.

0 Kudos
idata
Employee
1,135 Views

It looks like freezing the graph and using the full name of the output node got the compiler working. I still have another issue with one of my node types not being supported, but I can close this thread. Thanks.
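For reference, the freeze-and-compile sequence was roughly along these lines (node and file names taken from the earlier posts; exact flags may differ by TensorFlow/NCSDK version):

python3 -m tensorflow.python.tools.freeze_graph \
    --input_meta_graph v5_detector_movidius.meta \
    --input_checkpoint v5_detector_movidius \
    --input_binary true \
    --output_node_names detection_output/Sigmoid \
    --output_graph v5_detector_frozen.pb

mvNCCompile v5_detector_frozen.pb -s12 -in input_map -on detection_output/Sigmoid -o v5_detector_movidius.graph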

0 Kudos
Reply