Run Model Maker Object Detection TFLite model inference directly #5370
Labels:
platform:python (MediaPipe Python issues)
stat:awaiting googler (Waiting for Google Engineer's Response)
task:object detection (Track and label objects in images and video)
type:modelmaker (Creation of custom on-device ML solutions)
type:support (General questions)
I've trained a model with MediaPipe Model Maker.
I want to run inference through TensorFlow directly in Python, so I can use a Coral Edge TPU. Since it's a TFLite model, this should be possible.
But I'm struggling to get proper outputs.
For input, I resize to 256x256. I've tried normalisation to [0, 255], [0, 1], and [-1, 1].
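For reference, my preprocessing is roughly the sketch below (assuming the image is already resized to 256x256; which normalisation range the model actually expects is exactly what I'm unsure about):

```python
import numpy as np

def preprocess(image_256, mode="[-1,1]"):
    """Normalise a 256x256x3 uint8 image and add a batch dimension.

    The correct range for Model Maker detection models is what I'm
    asking about; these are the three variants I've tried.
    """
    x = image_256.astype(np.float32)
    if mode == "[0,1]":
        x = x / 255.0
    elif mode == "[-1,1]":
        x = x / 127.5 - 1.0
    # mode "[0,255]": leave the raw pixel values as-is
    return x[np.newaxis, ...]  # shape (1, 256, 256, 3)
```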
Running the signature function returns a dictionary of {detection_boxes, detection_scores}, where
shape(detection_boxes) = (1, num_boxes, 4)
and
shape(detection_scores) = (1, num_boxes, num_classes).
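To illustrate those shapes, this is how I'm currently reducing the scores tensor to a class id and confidence per box (whether the raw scores first need a sigmoid or softmax is part of what I'm unsure about):

```python
import numpy as np

# Dummy scores with the shape the signature returns: (1, num_boxes, num_classes)
scores = np.array([[[0.1, 0.9, 0.0],
                    [0.2, 0.3, 0.5]]], dtype=np.float32)

class_ids = np.argmax(scores[0], axis=-1)     # best class per box, shape (num_boxes,)
class_scores = np.max(scores[0], axis=-1)     # its confidence, shape (num_boxes,)
```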
However, the values I'm getting for detection_boxes are unnormalised and frequently negative.
I've tried searching the repo for how decoding is done and what pre-processing the input expects, but it's hard to traverse this repo.
Is there a minimal example of how to perform inference directly and decode the model output?
Failing that, what input does the model expect, and what format are the outputs detection_boxes and detection_scores?
(Code: https://gist.github.com/DoctorDinosaur/be495b6065fff29f79ec11306dd89c3b)
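If the raw box values are anchor-relative offsets (my guess, based on how SSD-style detectors usually work, which would explain the unnormalised and negative values), decoding would look something like the sketch below. The [ty, tx, th, tw] layout, the anchor format, and the scale factors (10/5 are common SSD defaults) are all assumptions on my part; I don't know what Model Maker actually uses:

```python
import numpy as np

def decode_boxes(raw_boxes, anchors, scales=(10.0, 10.0, 5.0, 5.0)):
    """SSD-style box decoding (sketch; layout and scales are guesses).

    raw_boxes: (num_boxes, 4) regression outputs as [ty, tx, th, tw]
    anchors:   (num_boxes, 4) as [y_center, x_center, h, w], in [0, 1]
    Returns (num_boxes, 4) as [ymin, xmin, ymax, xmax], normalised.
    """
    ty, tx, th, tw = (raw_boxes[:, i] / scales[i] for i in range(4))
    ycenter = ty * anchors[:, 2] + anchors[:, 0]
    xcenter = tx * anchors[:, 3] + anchors[:, 1]
    h = np.exp(th) * anchors[:, 2]
    w = np.exp(tw) * anchors[:, 3]
    return np.stack([ycenter - h / 2, xcenter - w / 2,
                     ycenter + h / 2, xcenter + w / 2], axis=-1)
```

A sanity check: with zero offsets, the decoded box should just be the anchor itself.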