Checklist
1. I have searched related issues but cannot get the expected help.
2. I have read the FAQ documentation but cannot get the expected help.
3. The bug has not been fixed in the latest version.
Describe the bug
I want to convert a YOLOv8 PyTorch model into a TensorRT model and run inference with it.
The PyTorch model was converted to ONNX format, and the ONNX model was then converted to a TensorRT model.
There were no errors during deployment or inference. However, unlike the PyTorch and ONNX models, the TensorRT model cannot infer any bboxes. This seems to be because the output is bound with a size of 0, such as (1, 0, 5).
Both the dynamic and static versions have this problem: their output shapes become zero during inference.
What modifications can be made so that the output's get_binding_shape becomes non-zero?
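For reference, the resolved binding shapes can be inspected directly with the TensorRT Python API. This is a minimal sketch, assuming a TensorRT 8.x-style binding API; the engine filename end2end.engine and the input name input are placeholder assumptions, not taken from the report. If an output still reports a zero dimension such as (1, 0, 5) after the input shape has been set, the zero usually originates in the exported ONNX graph rather than in the runtime:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# 'end2end.engine' is a placeholder path for the serialized TensorRT engine.
with open('end2end.engine', 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# For a dynamic-shape engine, output shapes stay unresolved (-1) or can
# collapse to 0 until a concrete input shape is set on the context.
# 'input' and (1, 3, 640, 640) are assumed; use your model's actual values.
context.set_binding_shape(engine.get_binding_index('input'), (1, 3, 640, 640))

for i in range(engine.num_bindings):
    kind = 'input' if engine.binding_is_input(i) else 'output'
    print(engine.get_binding_name(i), kind, context.get_binding_shape(i))
```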
Reproduction
configs for deploying
PyTorch config
ONNX deploy config (dynamic ver.)
TensorRT deploy config (dynamic ver.)
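For comparison, a typical mmdeploy TensorRT dynamic deploy config has the shape of the sketch below. This is a hypothetical stand-in, not the reporter's actual config; the input name input and the min/opt/max shapes are assumptions. The point is that the optimization profile must cover the input shapes actually used at inference time:

```python
# Hypothetical backend_config for a dynamic TensorRT deployment; values
# are illustrative and must match your exported ONNX input name and the
# image sizes you intend to feed.
backend_config = dict(
    type='tensorrt',
    common_config=dict(fp16_mode=False, max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 320, 320],
                    opt_shape=[1, 3, 640, 640],
                    max_shape=[1, 3, 1344, 1344])))
    ])
```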
inference code
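The inference step presumably looks something like the following sketch using mmdeploy's high-level API; all file paths are placeholders rather than the reporter's actual files:

```python
from mmdeploy.apis import inference_model

# All paths below are hypothetical placeholders.
result = inference_model(
    model_cfg='yolov8_model_config.py',          # model config
    deploy_cfg='detection_tensorrt_dynamic.py',  # TensorRT deploy config
    backend_files=['end2end.engine'],            # serialized engine
    img='demo.jpg',
    device='cuda:0')
print(result)  # comes back with no bboxes when the output binds as (1, 0, 5)
```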
When inference_model is called, the TensorRT model output is bound with a zero shape in mmdeploy.backend.tensorrt.wrapper.py.
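To illustrate why a zero binding shape produces no bboxes: a TensorRT wrapper typically allocates its output buffers from the resolved binding shapes, roughly as in the sketch below. This shows the general pattern, not mmdeploy's actual source; engine and context are as in the inspection sketch above, and the output name dets is an assumption:

```python
import torch

# 'dets' is an assumed output binding name; engine/context as set up above.
idx = engine.get_binding_index('dets')
shape = tuple(context.get_binding_shape(idx))  # e.g. (1, 0, 5) in this report

# An output buffer sized from a zero-dim shape is empty, so downstream
# post-processing never sees a single detection.
output = torch.empty(shape, dtype=torch.float32, device='cuda')
print(output.shape, output.numel())  # torch.Size([1, 0, 5]) 0
```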
Environment
Error traceback