
Question about converting to ONNX #117

Description

@BaoBaoJianqiang

I have read your convertor code and successfully converted the model to a CPU version; it runs at roughly 11 s per image.
To speed things up further, I tried exporting to ONNX, but I ran into problems. Could you advise on the correct conversion procedure? My code is below (placed in the init method of TestModel.py, right after model.load_state_dict(d)):
import onnx
import onnxruntime
import torch

export_onnx_file = './net.onnx'
# Export the loaded model; the dummy input should live on the same device as the model
# (the model here was loaded as a CPU version, so device='cpu' may be needed instead of 'cuda').
torch.onnx.export(model,
                  torch.randn(1, 1, 224, 224, device='cuda'),
                  export_onnx_file,
                  verbose=False,
                  input_names=["inputs"] + ["params_%d" % i for i in range(120)],
                  output_names=["outputs"],
                  opset_version=10,
                  do_constant_folding=True,
                  dynamic_axes={"inputs": {0: "batch_size", 2: "h", 3: "w"},
                                "outputs": {0: "batch_size"}})

# Reload the exported file and check that the graph is well-formed.
net = onnx.load('./net.onnx')
onnx.checker.check_model(net)
print(onnx.helper.printable_graph(net.graph))
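
For reference, here is a minimal sketch of how the exported file could then be run with onnxruntime on CPU to measure the speedup. The input name "inputs" and the (1, 1, 224, 224) shape follow the export call above; the random array is only a stand-in for a preprocessed image, since the actual preprocessing code is not shown here.

import numpy as np
import onnxruntime

# Load the exported graph into a CPU inference session.
session = onnxruntime.InferenceSession('./net.onnx',
                                       providers=['CPUExecutionProvider'])

# Stand-in for a preprocessed image; replace with the real input tensor.
dummy = np.random.randn(1, 1, 224, 224).astype(np.float32)

# Feed the tensor under the input name used at export time and fetch "outputs".
result = session.run(["outputs"], {"inputs": dummy})
print(result[0].shape)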
