
Could you specify which exact model of the NVIDIA Jetson Orin series was used in this project? #224

Open
YueqiLuoCata opened this issue Dec 11, 2023 · 3 comments

Comments


Could you specify which exact model of the NVIDIA Jetson Orin series was used to achieve the performance metrics detailed in the README? The candidates are:
Jetson AGX Orin 64GB
Jetson AGX Orin Industrial
Jetson AGX Orin 32GB
Jetson Orin NX 16GB
Jetson Orin NX 8GB
Jetson Orin Nano 8GB
Jetson Orin Nano 4GB

@hopef (Collaborator) commented Dec 12, 2023

Jetson AGX Orin 64GB

@guangqianzhang commented Jan 10, 2024

A couple of questions:

  1. Is 8 GB of memory sufficient? If not, how much is needed?
  2. The Jetson Orin Nano 8GB is sm=86. Is it suitable for this project, or is an AGX Orin required? Thanks!

@hopef (Collaborator) commented Jan 11, 2024

  1. Is 8 GB of memory sufficient? If not, how much is needed?
    -> 8 GB is enough for inference, but it helps to reduce the system's memory footprint. If you additionally disable the cuDNN/cuBLAS tactic sources in TensorRT, you can save even more memory.

  2. The Jetson Orin Nano 8GB is sm=86. Is it suitable for this project, or is an AGX Orin required? Thanks!
    -> The Jetson Orin Nano 8GB (sm=86) is fine for inference; an AGX Orin is not required.
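As a sketch of the memory-saving suggestion above: when building the engine with TensorRT's `trtexec` tool, the cuDNN/cuBLAS tactic sources can be excluded via `--tacticSources` (available in TensorRT 8.x). The model and engine paths below are placeholders, and `--fp16` is only an example precision choice; adapt both to your setup.

```shell
# Build a TensorRT engine with the cuDNN/cuBLAS tactic sources disabled,
# trading some kernel-selection flexibility for a smaller memory footprint.
# "-NAME" removes a tactic source from the default set.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --tacticSources=-CUDNN,-CUBLAS,-CUBLAS_LT
```

The same restriction can be applied programmatically through the builder config (`IBuilderConfig::setTacticSources` in the C++ API) if the engine is built in code rather than with `trtexec`.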
