I converted ONNX to ncnn successfully, but my inference output is all NaN, e.g. the output of net.extract() is all NaN. #5442
Comments
What is the result when transferring the model into .param & .bin? Are some ops not supported? I checked the output from different output layers and found it prints "NaN" after some middle layers, but I can't locate it. So maybe an unsupported op exists. Can you upload the original model file (e.g. .onnx) so I can check the model structure further?
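Locating the first layer that produces NaN can be done by extracting each intermediate blob in order and checking it. A minimal sketch with numpy (the layer names and arrays below are illustrative dummies; with ncnn you would fill the dict by calling ex.extract() on each blob name listed in the .param file):

```python
import numpy as np

def first_nan_layer(outputs):
    """outputs: insertion-ordered dict of {blob_name: np.ndarray}.
    Returns the name of the first blob containing NaN, or None."""
    for name, blob in outputs.items():
        if np.isnan(blob).any():
            return name
    return None

# Dummy data simulating a NaN first appearing after a middle layer.
outputs = {
    "conv_0": np.ones((4, 4), dtype=np.float32),
    "norm_1": np.full((4, 4), np.nan, dtype=np.float32),  # first bad layer
    "conv_2": np.full((4, 4), np.nan, dtype=np.float32),
}
print(first_nan_layer(outputs))  # -> norm_1
```

Once the first bad blob is known, the op that produced it (and whether it is a custom/unsupported layer) can be read off the .param file.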
Thank you for your reply!
I tried using numpy instead of PyTorch. The inference result was not completely black, but it was not normal either.
Hello!
2. Your own LayerNorm2d_Sc works the same as the original one. If your own LayerNorm2d_Sc works but fails when transferring to the ncnn model, maybe you can update the ncnn version and compile the layernorm operation (see #5262 (comment) for details). Could you post the error message?
I do these operations to get ncnn: PyTorch model --> onnxsim --> ncnn. But I got "LayerNormalization not supported yet!" when turning it into ncnn.
The number of errors reported may correspond to the number of custom LayerNorm operations.
Then I execute the conversion command under ncnn/build.
Haha, I got "LayerNormalization not supported yet!" when converting to ncnn too.
I added the LayerNorm implementation to ncnn; why is it still not supported? It feels like the conversion process does not call ncnn's LayerNorm.
1. I didn't try to register my own op, but I think it should be an individual .h & .cpp file to declare the class.
I have tried to supplement the LayerNorm implementation in ncnn: I added the LayerNormalization implementation according to the reference document (add custom layer) and recompiled. 1. LayerNorm in ncnn supports normalization along the channel dim:
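What "normalization along the channel dim" means can be sketched in numpy (a minimal illustration of a channel-wise LayerNorm2d, not ncnn's actual source; the CHW layout, eps value, and affine parameters are assumptions):

```python
import numpy as np

def layernorm_channel(x, gamma, beta, eps=1e-5):
    """Normalize a CHW tensor across the channel dim at each spatial
    location, then apply per-channel affine parameters gamma/beta.
    Illustrative only; mirrors the idea of LayerNorm over channels."""
    mean = x.mean(axis=0, keepdims=True)   # (1, H, W)
    var = x.var(axis=0, keepdims=True)     # (1, H, W)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma[:, None, None] * x_hat + beta[:, None, None]

c, h, w = 8, 4, 4
x = np.random.randn(c, h, w).astype(np.float32)
y = layernorm_channel(x, np.ones(c, np.float32), np.zeros(c, np.float32))
# Sanity check: per-pixel mean over channels should be ~0 after normalization.
print(np.abs(y.mean(axis=0)).max() < 1e-3)
```

Comparing this against what the exported model actually computes (e.g. LayerNorm over the last dim instead of the channel dim) is a common source of wrong-but-not-NaN outputs after conversion.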
If you edit the file LayerNorm.cpp, the op is still called LayerNorm, but the custom op is called LayerNormalization according to "LayerNormalization not supported yet!", so maybe you should declare a new op class.
I know what you mean. I wrote two files named "LayerNormalization.h" and "LayerNormalization.cpp", modified src/CMakeLists.txt with ncnn_add_layer(LayerNormalization), and then compiled again. But it doesn't seem to work.
Yeah, I ran into the same situation, but I don't know why it didn't work.
Help, please. @nihui |
Thanks again. I won't give up and will solve this problem sooner or later. I must switch to ncnn, as it's perfect in my view.
I used PNNX to resolve my trouble in the end! |
error log
context
Ubuntu 18.04.6 LTS
ncnn-20240410-android-shared
android-ndk-r17c
how to reproduce
1. Just run
./sc_ncnn img.jpg
with adb shell on Android.
2. I try to execute
net.extract("x_lr_A", A);
in C++, and A is all NaN. It depressed me...
pre_process:
1. cv.imread
2. resize(512,512)
3. hwc --> chw; maybe wrong, but shouldn't produce NaN.
4. normalize to [0,1]; maybe not right, but shouldn't produce NaN.
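The four preprocessing steps above can be sketched in numpy (a rough, dependency-free illustration; the image, target size, and nearest-neighbor resize stub are assumptions, and cv2.resize would be used in practice):

```python
import numpy as np

def preprocess(img_hwc_uint8, size=(512, 512)):
    """Steps 1-4: assume the image is already loaded cv.imread-style
    (HWC, uint8). Resize is stubbed with nearest-neighbor indexing so
    the sketch stays self-contained."""
    h, w, _ = img_hwc_uint8.shape
    ys = np.arange(size[0]) * h // size[0]
    xs = np.arange(size[1]) * w // size[1]
    resized = img_hwc_uint8[ys[:, None], xs[None, :], :]  # (512, 512, 3)
    chw = resized.transpose(2, 0, 1)                      # HWC -> CHW
    return chw.astype(np.float32) / 255.0                 # normalize to [0, 1]

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(img)
print(x.shape, x.min() >= 0.0, x.max() <= 1.0)  # (3, 512, 512) True True
```

Note that dividing uint8 data by 255 cannot itself produce NaN, which supports the conclusion that the NaN originates inside the network rather than in preprocessing.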
I'm new to ncnn, but curious about its extreme mobile performance, so I must switch to ncnn right now! I will paste my code below. Thanks!
more
ncnn.zip
Chinese version (translated):
I compiled sc_ncnn with make on Ubuntu (it compiled successfully with someone else's help; I don't know much about deployment, but I greatly admire ncnn's extreme performance, so I wanted to try it) and ran it on Android. The model has two outputs named x_lr_A and x_lr_b, but the values I get via net.extract() are all NaN, and I don't know where I went wrong. I have attached the .param and .bin files and hope to get help. Thanks!
Help, please.