
Convert ONNX to ENGINE file failure of TensorRT 8.6 when running the command "deepstream-app -c deepstream_app_config.txt" on GPU NVIDIA ORIN NX 64GB #3832

Open
bravewhh opened this issue Apr 28, 2024 · 6 comments

@bravewhh

Description

When I followed the steps provided by DeepStream-Yolo to convert the YOLOv5n model from ONNX to an engine file, I received the following error message:
[screenshot of the error message]

Running Environment

deepstream-app version 6.4.0
DeepStreamSDK 6.4.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.2
TensorRT Version: 8.6
cuDNN Version: 8.9
libNVWarp360 Version: 2.0.1d3
NVIDIA GPU: NVIDIA Corporation Device 229e (rev a1)
NVIDIA Driver Version: 540.2.0

Operating System: Jetson (JetPack 6.0)

Running command

python3 export_yoloV5.py -w yolov5s.pt --dynamic
cd Deepstream-yolo
CUDA_VER=12.2 make -C nvdsinfer_custom_impl_Yolo
sudo deepstream-app -c deepstream_app_config.txt
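
To narrow down whether the engine build fails inside DeepStream or already at the TensorRT level, one option is to build the engine directly with trtexec (shipped with JetPack under /usr/src/tensorrt/bin). This is only a sketch: the ONNX file name, the input tensor name ("input"), and the shape values are assumptions based on a typical DeepStream-Yolo export and need to be adapted to the actual model.

# Sanity check: build the engine outside deepstream-app (file names, tensor name, and shapes are assumptions)
/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine \
    --minShapes=input:1x3x640x640 --optShapes=input:1x3x640x640 --maxShapes=input:4x3x640x640 \
    --fp16 --verbose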

Others

I referred to issue #2535 and tried to generate the ONNX model with "python3 export_yoloV5.py -w yolov5s.pt --dynamic --simplify" and with "python3 export_yoloV5.py -w yolov5s.pt", but the error above still occurs.

Please help me solve this problem, thank you very much!

@lix19937

Try to use a fixed shape: python3 export_yoloV5.py -w yolov5s.pt --simplify
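
If the model is re-exported with a fixed shape, it may also help to delete any engine file created by an earlier run so that deepstream-app rebuilds it from the new ONNX. This is a sketch; the engine file name below is only an assumption about the default naming and should be checked against the actual files in the working directory.

# remove a stale engine from a previous run so it is rebuilt (file name is an assumption)
rm -f model_b1_gpu0_fp32.engine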

@bravewhh
Author

Try to use a fixed shape: python3 export_yoloV5.py -w yolov5s.pt --simplify

I have tried this and it does not work; the error above still occurs.

@lix19937

lix19937 commented Apr 30, 2024

  1. Can it run in an x86_64 environment (with the same NVIDIA CUDA/TensorRT versions)?

  2. You can try to use the latest TensorRT version (a quick way to check what is currently installed is sketched below).
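
A quick way to confirm which TensorRT and CUDA builds JetPack actually installed (a sketch, assuming the usual Debian packages that JetPack provides):

# list the installed TensorRT / CUDA packages on the Jetson
dpkg -l | grep -E "nvinfer|tensorrt|cuda-toolkit"
# print the TensorRT version visible from Python
python3 -c "import tensorrt; print(tensorrt.__version__)"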

@bravewhh
Author

  1. Can it run in an x86_64 environment (with the same NVIDIA CUDA/TensorRT versions)?
  2. You can try to use the latest TensorRT version.

Sorry, I don't have an x86_64 environment with CUDA. TensorRT is currently running on the Jetson; when I flashed JetPack it included TensorRT 8.6.

@zerollzeng
Collaborator

Looks like a dup of #2535

@zerollzeng
Collaborator

#2535 (comment)

@zerollzeng zerollzeng self-assigned this May 3, 2024
@zerollzeng zerollzeng added the triaged label May 3, 2024