
Error when exporting ONNX #110

@jyh11111

Description


Running python tools/deployment/export_onnx.py -c <config file path> -r <checkpoint path> [--check] [--simplify] fails with the following error:

D:\deim\DEIM> python tools/deployment/export_onnx.py -c configs\deim_dfine\my_deim_hgnetv2_n_coco.yml -r deim_outputs\deim_hgnetv2_n_coco\best_stg2.pth
D:\deim\DEIM\tools\deployment\../..\engine\deim\dfine_decoder.py:644: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if memory.shape[0] > 1:
D:\deim\DEIM\tools\deployment\../..\engine\deim\dfine_decoder.py:129: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if reference_points.shape[-1] == 2:
D:\deim\DEIM\tools\deployment\../..\engine\deim\dfine_decoder.py:133: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
elif reference_points.shape[-1] == 4:
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\_internal\jit_utils.py:307: UserWarning: Constant folding - Only steps=1 can be constant folded for opset >= 10 onnx::Slice op. Constant folding not applied. (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\jit\passes\onnx\constant_fold.cpp:181.)
_C._jit_pass_onnx_node_shape_type_inference(node, params_dict, opset_version)
Traceback (most recent call last):
File "D:\deim\DEIM\tools\deployment\export_onnx.py", line 103, in <module>
main(args)
File "D:\deim\DEIM\tools\deployment\export_onnx.py", line 65, in main
torch.onnx.export(
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py", line 516, in export
_export(
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py", line 1596, in _export
graph, params_dict, torch_out = _model_to_graph(
^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py", line 1139, in _model_to_graph
graph = _optimize_graph(
^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py", line 677, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py", line 1940, in _run_symbolic_function
return symbolic_fn(graph_context, *inputs, **attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_helper.py", line 395, in wrapper
return fn(g, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_helper.py", line 306, in wrapper
return fn(g, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_opset10.py", line 203, in symbolic_fn
padding_ceil = opset9.get_pool_ceil_padding(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_opset9.py", line 1565, in get_pool_ceil_padding
return symbolic_helper._unimplemented(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_helper.py", line 612, in _unimplemented
_onnx_unsupported(f"{op}, {msg}", value)
File "D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\symbolic_helper.py", line 623, in _onnx_unsupported
raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of operator get_pool_ceil_padding, input size not accessible. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues [Caused by the value 'input.8 defined in (%input.8 : Float(*, *, *, *, strides=[1648656, 103041, 321, 1], requires_grad=1, device=cpu) = onnx::Pad[mode="constant"](%519, %545, %506), scope: __main__.main.<locals>.Model::/engine.deim.deim.DEIM::model/engine.backbone.hgnetv2.HGNetv2::backbone/engine.backbone.hgnetv2.StemBlock::stem # D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py:168:0
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Pad'.]
(node defined in D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py(168): forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1508): _slow_forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1527): _call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1518): _wrapped_call_impl
D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py(537): forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1508): _slow_forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1527): _call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1518): _wrapped_call_impl
D:\deim\DEIM\tools\deployment\../..\engine\deim\deim.py(27): forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1508): _slow_forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1527): _call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1518): _wrapped_call_impl
D:\deim\DEIM\tools\deployment\export_onnx.py(48): forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1508): _slow_forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1527): _call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1518): _wrapped_call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\jit\_trace.py(124): wrapper
D:\miniconda3\envs\deim\Lib\site-packages\torch\jit\_trace.py(133): forward
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1527): _call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\nn\modules\module.py(1518): _wrapped_call_impl
D:\miniconda3\envs\deim\Lib\site-packages\torch\jit\_trace.py(1285): _get_trace_graph
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py(915): _trace_and_get_graph_from_model
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py(1011): _create_jit_graph
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py(1135): _model_to_graph
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py(1596): _export
D:\miniconda3\envs\deim\Lib\site-packages\torch\onnx\utils.py(516): export
D:\deim\DEIM\tools\deployment\export_onnx.py(65): main
D:\deim\DEIM\tools\deployment\export_onnx.py(103): <module>
)

Inputs:
    #0: 519 defined in (%519 : Float(*, 16, 320, 320, strides=[1638400, 102400, 320, 1], requires_grad=1, device=cpu) = onnx::Add(%518, %model.backbone.stem.stem1.lab.bias), scope: __main__.main.<locals>.Model::/engine.deim.deim.DEIM::model/engine.backbone.hgnetv2.HGNetv2::backbone/engine.backbone.hgnetv2.StemBlock::stem/engine.backbone.hgnetv2.ConvBNAct::stem1/engine.backbone.hgnetv2.LearnableAffineBlock::lab # D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py:35:0
)  (type 'Tensor')
    #1: 545 defined in (%545 : Long(8, strides=[1], device=cpu) = onnx::Cast[to=7](%544), scope: __main__.main.<locals>.Model::/engine.deim.deim.DEIM::model/engine.backbone.hgnetv2.HGNetv2::backbone/engine.backbone.hgnetv2.StemBlock::stem # D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py:168:0
)  (type 'Tensor')
    #2: 506 defined in (%506 : NoneType = prim::Constant(), scope: __main__.main.<locals>.Model::/engine.deim.deim.DEIM::model/engine.backbone.hgnetv2.HGNetv2::backbone/engine.backbone.hgnetv2.StemBlock::stem/engine.backbone.hgnetv2.ConvBNAct::stem1/torch.nn.modules.conv.Conv2d::conv
)  (type 'NoneType')
Outputs:
    #0: input.8 defined in (%input.8 : Float(*, *, *, *, strides=[1648656, 103041, 321, 1], requires_grad=1, device=cpu) = onnx::Pad[mode="constant"](%519, %545, %506), scope: __main__.main.<locals>.Model::/engine.deim.deim.DEIM::model/engine.backbone.hgnetv2.HGNetv2::backbone/engine.backbone.hgnetv2.StemBlock::stem # D:\deim\DEIM\tools\deployment\../..\engine\backbone\hgnetv2.py:168:0
)  (type 'Tensor')
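For context on the failure (an aside, not part of the original report): at opset 10 the exporter rewrites pooling with ceil_mode=True as extra explicit padding followed by floor-rounded pooling, and computing that extra padding requires the concrete input size. Here the padded stem activation is traced with fully dynamic dimensions (the Float(*, *, *, *) above), so get_pool_ceil_padding has no number to work with, hence "input size not accessible". A minimal sketch of the arithmetic in plain Python, with illustrative helper names (pool_out and ceil_extra_padding are not DEIM or PyTorch functions):

```python
import math

def pool_out(size, k, s, p, ceil_mode):
    """Output length along one dimension of a pooling layer (PyTorch convention)."""
    rounding = math.ceil if ceil_mode else math.floor
    return rounding((size + 2 * p - k) / s) + 1

def ceil_extra_padding(size, k, s, p):
    """Extra padding that lets floor rounding reproduce ceil_mode=True.

    This is essentially what the opset-10 symbolic has to compute, and why
    it needs a concrete `size`: with a dynamic dimension there is nothing
    to plug in.
    """
    target = pool_out(size, k, s, p, ceil_mode=True)
    extra = 0
    while pool_out(size + extra, k, s, p, ceil_mode=False) < target:
        extra += 1
    return extra

# With the 321x321 padded stem activation from the trace above and a
# 2x2 / stride-1 pool, ceil and floor happen to agree -- but the exporter
# still has to prove that, which it cannot do for a dynamic size.
print(pool_out(321, k=2, s=1, p=0, ceil_mode=True))   # 320
print(ceil_extra_padding(5, k=2, s=2, p=0))           # 1
```

Workarounds typically reported for this class of error are exporting with a fully static input shape (so the spatial size becomes a constant in the trace) or rewriting the model to avoid ceil_mode=True pooling; whether either fits DEIM's export script is for the maintainers to confirm.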
