

int8 engine generation failed #163

Open
adonishong opened this issue Dec 23, 2021 · 2 comments

@adonishong

Hi, we highly appreciate your work.

Device: AGX Xavier
OS: Ubuntu 18.04.6 LTS
Jetpack 4.6, CUDA 10.2, CUDNN 8.2.1, TensorRT 8.0.1.6

We have tried FP32 and FP16 with yolov5-6.0 n/s/m/l/x and with yolov3; the project works perfectly in these modes. But INT8 mode does not work: we get the following error when building INT8 engines for yolov5-6.0 n/s/m/l/x, and for yolov3 as well:

ERROR: 2: [reformatRunner.cpp::onShapeChangeNONCONST::104] Error Code 2: Internal Error (Assertion mCombinedScalesSize >= channelDst failed.)
yolo-trt: /home/nvidia/Projects/yolo-tensorrt/modules/yolo.cpp:488: void Yolo::createYOLOEngine(nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `m_Engine != nullptr' failed.
Aborted (core dumped)

@wangxudong-cq

Any update on this?

@Lenan22

Lenan22 commented Oct 31, 2022


Please refer to our open-source quantization tool ppq; its quantization accuracy is better than TensorRT's built-in calibration. If you encounter issues, we can help you solve them.
https://github.com/openppl-public/ppq/blob/master/md_doc/deploy_trt_by_OnnxParser.md
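For context on what a calibrator computes: the assertion in the error above (`mCombinedScalesSize >= channelDst`) concerns the INT8 scales TensorRT derives during calibration. The sketch below illustrates the basic idea of symmetric INT8 quantization — mapping float activations onto the [-127, 127] range via a scale. This is a minimal illustration using a simple max-abs rule, not this project's code and not TensorRT's entropy (KL-divergence) calibration; all names here are made up for the example.

```python
# Minimal sketch of symmetric per-tensor INT8 quantization.
# A real calibrator (TensorRT, ppq) chooses the clipping range more
# carefully; this uses max-abs for clarity.

def maxabs_scale(values):
    """Scale that maps the largest-magnitude float onto +/-127."""
    max_abs = max(abs(v) for v in values)
    return max_abs / 127.0 if max_abs > 0 else 1.0

def quantize(values, scale):
    """Round floats to int8 codes, clamped to [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

activations = [0.05, -1.2, 0.73, 2.54, -0.9]
s = maxabs_scale(activations)
q = quantize(activations, s)
recovered = dequantize(q, s)
```

Per-channel quantization applies this same idea with one scale per output channel, which is why a mismatch between the number of stored scales and the destination channel count can trigger an internal assertion like the one reported here.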


3 participants