Checklist
1. I have searched related issues but cannot get the expected help.
2. I have read the FAQ documentation but cannot get the expected help.
3. The bug has not been fixed in the latest version.
Describe the bug
I am trying to export a partitioned ONNX file for my custom CenterPoint-based model via partition_cfg. Although tools/deploy.py does produce a partitioned ONNX model, some of the shapes are not properly inserted.
What we expect: (ONNX file from our own script)
What we get from mmdeploy:
Also, I get the following warnings, which seem to be the cause of the above issue.
[W shape_type_inference.cpp:1973] Warning: The shape inference of mmdeploy::Mark type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
(function UpdateReliable)
... (the warning repeats for every mmdeploy::Mark node in the graph)
Reproduction
Run tools/deploy.py with the following config:
codebase_config = dict(
    type='mmdet3d', task='VoxelDetection', model_type='end2end')
backend_config = dict(
    type='tensorrt',
    common_config=dict(max_workspace_size=1 << 32),
    model_inputs=[
        ...
    ])
partition_config = dict(
    type='centerpoint_without_voxelization',  # the partition policy name
    apply_marks=True,  # should always be set to True
    partition_cfg=[
        dict(
            save_file='pts_backbone_neck_head_centerpoint.onnx',  # filename to save the partitioned onnx model
            start=['pillar_feature_net:input'],  # [mark_name:input/output, ...]
            end=['pillar_feature_net:output'],  # [mark_name:input/output, ...]
            output_names=['pillar_features'],
            dynamic_axes={
                'input_features': {
                    0: 'num_voxels',
                    1: 'num_max_points',
                },
                'pillar_features': {
                    0: 'num_voxels',
                },
            },
        ),
    ],
)
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='end2end.onnx',
    ...
)
Environment
07/17 03:58:54 - mmengine - INFO -
07/17 03:58:54 - mmengine - INFO - **********Environmental information**********
07/17 03:58:55 - mmengine - INFO - sys.platform: linux
07/17 03:58:55 - mmengine - INFO - Python: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0]
07/17 03:58:55 - mmengine - INFO - CUDA available: True
07/17 03:58:55 - mmengine - INFO - MUSA available: False
07/17 03:58:55 - mmengine - INFO - numpy_random_seed: 2147483648
07/17 03:58:55 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 4090
07/17 03:58:55 - mmengine - INFO - CUDA_HOME: /usr/local/cuda
07/17 03:58:55 - mmengine - INFO - NVCC: Cuda compilation tools, release 12.1, V12.1.105
07/17 03:58:55 - mmengine - INFO - GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
07/17 03:58:55 - mmengine - INFO - PyTorch: 2.2.2
07/17 03:58:55 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 12.1
- NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
- CuDNN 8.9.2
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
07/17 03:58:55 - mmengine - INFO - TorchVision: 0.17.2
07/17 03:58:55 - mmengine - INFO - OpenCV: 4.9.0
07/17 03:58:55 - mmengine - INFO - MMEngine: 0.10.3
07/17 03:58:55 - mmengine - INFO - MMCV: 2.1.0
07/17 03:58:55 - mmengine - INFO - MMCV Compiler: GCC 11.4
07/17 03:58:55 - mmengine - INFO - MMCV CUDA Compiler: 12.1
07/17 03:58:55 - mmengine - INFO - MMDeploy: 1.3.1+
07/17 03:58:55 - mmengine - INFO -
07/17 03:58:55 - mmengine - INFO - **********Backend information**********
07/17 03:58:55 - mmengine - INFO - tensorrt: None
07/17 03:58:55 - mmengine - INFO - ONNXRuntime: None
07/17 03:58:55 - mmengine - INFO - pplnn: None
07/17 03:58:55 - mmengine - INFO - ncnn: None
07/17 03:58:55 - mmengine - INFO - snpe: None
07/17 03:58:55 - mmengine - INFO - openvino: None
07/17 03:58:55 - mmengine - INFO - torchscript: 2.2.2
07/17 03:58:55 - mmengine - INFO - torchscript custom ops: NotAvailable
07/17 03:58:55 - mmengine - INFO - rknn-toolkit: None
07/17 03:58:55 - mmengine - INFO - rknn-toolkit2: None
07/17 03:58:55 - mmengine - INFO - ascend: None
07/17 03:58:55 - mmengine - INFO - coreml: None
07/17 03:58:55 - mmengine - INFO - tvm: None
07/17 03:58:55 - mmengine - INFO - vacc: None
07/17 03:58:55 - mmengine - INFO -
07/17 03:58:55 - mmengine - INFO - **********Codebase information**********
07/17 03:58:55 - mmengine - INFO - mmdet: 3.2.0
07/17 03:58:55 - mmengine - INFO - mmseg: None
07/17 03:58:55 - mmengine - INFO - mmpretrain: None
07/17 03:58:55 - mmengine - INFO - mmocr: None
07/17 03:58:55 - mmengine - INFO - mmagic: None
07/17 03:58:55 - mmengine - INFO - mmdet3d: 1.4.0
07/17 03:58:55 - mmengine - INFO - mmpose: None
07/17 03:58:55 - mmengine - INFO - mmrotate: None
07/17 03:58:55 - mmengine - INFO - mmaction: None
07/17 03:58:55 - mmengine - INFO - mmrazor: None
07/17 03:58:55 - mmengine - INFO - mmyolo: None
Error traceback
No response
kminoda changed the title from "[Bug] Warning: The shape inference of mmdeploy::Mark type is missing," to "[Bug] Warning: The shape inference of mmdeploy::Mark type is missing" on Jul 17, 2024.
07/17 04:09:42 - mmengine - WARNING - Failed to search registry with scope "mmdet3d" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet3d" is a correct scope, or whether the registry is initialized.