TensorFlow 2 Models for Jetson Nano #86

Open
franferraz98 opened this issue Nov 25, 2021 · 0 comments

@franferraz98
Hi,

I'm currently working on an AI project using an NVIDIA Jetson Nano (4GB) and TensorFlow 2, where we were planning to use a Faster R-CNN Inception ResNet V2 640x640 model. We tried using TF-TRT to reduce the network, but it seems to be too big to fit: the available video memory is not large enough, and adding swap doesn't solve the issue.
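For context, the conversion we attempted was along these lines (a minimal sketch; the saved-model paths, FP16 precision and workspace size are placeholders for our setup, not an exact reproduction):

```python
# Rough TF-TRT conversion sketch for a TF2 SavedModel (paths/settings are placeholders).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode='FP16',                 # assumed precision; INT8 would need calibration data
    max_workspace_size_bytes=1 << 30,      # assumed 1 GB workspace, limited by the Nano's shared memory
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='saved_model',   # placeholder path to the exported detection model
    conversion_params=params,
)
converter.convert()
converter.save('saved_model_trt')          # placeholder output path
```

Even with FP16 and a reduced workspace, the Faster R-CNN Inception ResNet V2 model does not fit on the 4GB device.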

We have run several tests and, so far, the heaviest network from the TensorFlow Model Zoo that we have managed to get working is SSD MobileNet V2 FPNLite 640x640.

I've been searching for a list of networks that have been tested on this device with TF2, but I can't find one. I'm aware of this list, but it covers TF1 and doesn't include the TF2 Model Zoo models.

Is there any chance that a list of working models, with their speed, memory usage and mAP measured on a common dataset (COCO or similar), could be developed? This would be especially useful across the different ways of deploying a model: plain CPU, optimized CPU with TFLite, GPU, optimized GPU with TF-TRT, fully optimized with pure TensorRT, or any other option I haven't considered. The TFLite path alone already involves its own conversion step, as in the sketch below.
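As an illustration of the "optimized CPU with TFLite" option, a conversion would look roughly like this (a minimal sketch; the model path and optimization settings are placeholders, and detection models usually need extra care with input/output signatures):

```python
# Rough TFLite conversion sketch from a TF2 SavedModel (paths are placeholders).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]                 # default post-training quantization

tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

Comparing this kind of deployment against TF-TRT and pure TensorRT on the same dataset is exactly the kind of table that would help.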

This seems to be a fairly common problem; non-experts like me end up feeling a bit lost.

Thank you in advance.
