
Problem when quantizing models #485

Open
CdAB63 opened this issue Jul 24, 2023 · 0 comments


CdAB63 commented Jul 24, 2023

Trying to use a quantized model returns:

$ python detect_video.py --video 0 --weights ./checkpoints/yolov4-tflite-416 --framework tflite

Weights: ./checkpoints/yolov4-tflite-416
Traceback (most recent call last):
  File "detect_video.py", line 125, in <module>
    app.run(main)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
  File "detect_video.py", line 40, in main
    interpreter = tf.lite.Interpreter(model_path=FLAGS.weights)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/tensorflow/lite/python/interpreter.py", line 464, in __init__
    self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
ValueError: Mmap of '4' at offset '0' failed with error '19'.

The weights were generated with:

$ python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

and then:

$ python convert_tflite.py --quantize_mode int8 --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite
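One detail worth noting in the commands above: the conversion step writes ./checkpoints/yolov4-416-int8.tflite, but detect_video.py is invoked with --weights ./checkpoints/yolov4-tflite-416. Error '19' is errno ENODEV on Linux, which mmap typically returns when the path cannot be memory-mapped (for example, a directory such as a SavedModel checkpoint rather than a flat .tflite file). A minimal sketch of a guard that surfaces this mismatch early (load_tflite_interpreter is a hypothetical helper, not part of the repo):

```python
import os


def load_tflite_interpreter(model_path):
    """Load a TFLite model, failing early with a clear message if the
    path does not look like a .tflite file (hypothetical helper)."""
    if os.path.isdir(model_path):
        # A SavedModel checkpoint (e.g. ./checkpoints/yolov4-tflite-416) is a
        # directory; mmap'ing it yields the opaque "error '19'" (ENODEV).
        raise ValueError(
            f"'{model_path}' is a directory; pass the converted .tflite "
            "file instead, e.g. ./checkpoints/yolov4-416-int8.tflite"
        )
    if not model_path.endswith(".tflite"):
        raise ValueError(f"'{model_path}' does not end in .tflite")
    # Deferred import so the path checks run even without TensorFlow installed.
    import tensorflow as tf
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return interpreter
```

If the guard passes, the interpreter is constructed exactly as detect_video.py does it (tf.lite.Interpreter(model_path=...) followed by allocate_tensors()).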