Cannot use model-analyzer on ONNX classification model with dynamic input #7184
From the log it seems like the input shape needs to be specified:
I think you might need to specify the input shape as mentioned here. cc @debermudez for possible correction.
Yes, I tried, but I still have the same issue... The input layer is `tensor: float32[1,3,224,224]`.
Your command above does not specify the shape. Could you add a flag to do so? If you already added the flag, can you provide the full command and the new error you are seeing?
Could you try this, actually? I haven't used MA in a while and did not realize that.
I don't think this can be used that way...
I think it might be more straightforward if you can replicate the issue (I provided the model file). Thanks.
Ah, it looked like that was a flag, but I believe you can actually only set it via the config. @nv-braf may know another way, but that looks to be what the documentation is suggesting for this exact use case. Please take a look at the documentation and see if following it resolves your dynamic input shape error.
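The config-based approach mentioned above could look roughly like the sketch below. This is a hedged example, not a verified fix: the tensor name `input` is a placeholder (check the model's actual input name, e.g. by inspecting the ONNX file), and the path mirrors the repro command later in this issue. Note that `perf_analyzer`'s `--shape` takes the shape without the batch dimension, so `float32[1,3,224,224]` becomes `3,224,224`.

```yaml
# Sketch of a Model Analyzer YAML config that passes a fixed shape
# through to perf_analyzer for a model with a dynamic input.
# NOTE: "input" is a placeholder tensor name; replace it with the
# real input name of the age ONNX model.
model_repository: /YOUR_PATH/examples/quick-start/
profile_models:
  age:
    perf_analyzer_flags:
      shape: input:3,224,224
```

You would then launch the profile with `model-analyzer profile -f config.yaml`, since `-f`/`--config-file` is how Model Analyzer loads a YAML config.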
• RTX 3060
• Docker nvcr.io/nvidia/tritonserver:24.04-py3-sdk
• Cannot run model-analyzer on a model
I am currently profiling several models with `model-analyzer`. I can't manage to do this with one of my models, and I'd like to have more information about the error encountered.
Here is the error message:
Here are the steps to reproduce:
Clone that repo and go to that directory:
https://github.com/triton-inference-server/model_analyzer
Start Triton container
docker run -it --gpus all -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start --net=host nvcr.io/nvidia/tritonserver:24.04-py3-sdk
Add this folder to the model repository:
age.zip (21.3 MB)
Run model analysis:
model-analyzer profile --model-repository /YOUR_PATH/examples/quick-start/ --profile-models age --triton-launch-mode=docker --output-model-repository-path /opt/output_dir --export-path profile_results
Thanks