
converting to tfjs model #286

Open
m-ameri opened this issue May 20, 2023 · 0 comments
m-ameri commented May 20, 2023

Hello,
I need to use this lightweight model as a tfjs model for a face-detection task, so I tried to convert the RFB version of the model (from `tf/export_models/RFB`) to a tfjs model using the following command:

```
tensorflowjs_converter \
  --input_format=tf_saved_model \
  --output_format=tfjs_graph_model \
  --signature_name=serving_default \
  --saved_model_tags=serve \
  /saved_model \
  /tfjs_model
```

The model converts successfully, with a couple of warnings:

```
2023-05-20 14:19:52.560333: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:root:TensorFlow Decision Forests 1.2.0 is compatible with the following TensorFlow Versions: ['2.11.0']. However, TensorFlow 2.11.1 was detected. This can cause issues with the TF API and symbols in the custom C++ ops. See the TF and TF-DF compatibility table at https://github.com/tensorflow/decision-forests/blob/main/documentation/known_issues.md#compatibility-table.
...
WARNING:tensorflow:Didn't find expected Conv2D or DepthwiseConv2dNative input to 'StatefulPartitionedCall/functional_1/basenet.7.branch0.2_bn/FusedBatchNormV3'
...
```

However, when I call `model.predict()` on the tfjs model after loading it as a graph model (`model = await tf.loadGraphModel('./model.json')`), the following error is raised:

```
Error: This execution contains the node 'StatefulPartitionedCall/functional_1/tf_op_layer_NonMaxSuppressionV3/NonMaxSuppressionV3', which has the dynamic op 'NonMaxSuppressionV3'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [StatefulPartitionedCall/functional_1/tf_op_layer_GatherV2_1/GatherV2_1]
```

Is there a way to fix this so that I can use the tfjs model?

Thanks
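For reference, the error message itself points at one workaround: graph models containing dynamic ops such as `NonMaxSuppressionV3` cannot be run with the synchronous `model.predict()`, but `model.executeAsync()` can resolve them. A minimal sketch of that substitution follows; the function name `detectFaces`, the model URL, and the injected `tf` handle are illustrative assumptions, not part of this repo's API:

```javascript
// Sketch: run a converted tfjs graph model that contains dynamic ops
// (e.g. NonMaxSuppressionV3). The `tf` object is passed in so this works
// the same in the browser or in Node with @tensorflow/tfjs-node.
async function detectFaces(tf, modelUrl, inputTensor) {
  // Load the converted model (e.g. './model.json' from tensorflowjs_converter)
  const model = await tf.loadGraphModel(modelUrl);
  // predict()/execute() throw on dynamic ops; executeAsync() awaits them
  return model.executeAsync(inputTensor);
}
```

The trade-off is that `executeAsync()` returns a Promise, so every call site has to `await` the result instead of using it synchronously; the alternative hinted at in the error message is to request only output nodes upstream of the NMS op, which keeps `predict()` usable but moves non-max suppression into your own post-processing code.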
