hls4ml config_from_onnx_model fails when using a Resize node with no ROI (Brevitas -> QONNX) #1266

Open · 4 tasks done
Dyoxyz opened this issue Apr 10, 2025 · 3 comments

@Dyoxyz commented Apr 10, 2025

Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
  • Check that the issue hasn't already been reported, by checking the currently open issues.
  • If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the hls4ml project files, which were created directly before and/or after the bug.

Quick summary

hls4ml config_from_onnx_model fails when using a Resize node with no ROI.

Details

hls4ml's config_from_onnx_model fails when the model contains a Resize node with no ROI input, as produced when a Brevitas QuantUpsample layer is converted to QONNX.

Steps to Reproduce

  1. Clone the hls4ml repository
  2. Check out the master branch at commit hash 77b8331
  3. Run the code below
import torch.nn as nn
import torch.nn.functional as F
import brevitas.nn as qnn
from brevitas.export import export_qonnx
import torch
import qonnx
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model
from qonnx.transformation.channels_last import ConvertToChannelsLastAndClean
from qonnx.transformation.qcdq_to_qonnx import QCDQToQuant
from qonnx.transformation.gemm_to_matmul import GemmToMatMul
import onnx

def init_weights(m):
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.ConvTranspose2d):
        nn.init.kaiming_normal_(m.weight, mode="fan_out", nonlinearity="leaky_relu")
        if m.bias is not None:
            nn.init.constant_(m.bias, 0)
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.constant_(m.weight, 1)
        nn.init.constant_(m.bias, 0)

class test_model(nn.Module):
    def __init__(self):
        super(test_model, self).__init__()

        self.quant_inp = qnn.QuantIdentity(bit_width=4, return_quant_tensor=True)
        self.upsample = qnn.QuantUpsample(scale_factor=2)

        for m in self.modules():
            init_weights(m)

    def forward(self, x):
        x1 = self.quant_inp(x)
        x2 = self.upsample(x1)
        return x2

# Export the model to QONNX format
model = test_model()
export_qonnx(model, torch.randn(1, 1, 25, 25), export_path='qmodel.onnx')

# Load the exported model and apply the usual QONNX cleanup/transformation passes
model = ModelWrapper('qmodel.onnx')
model = cleanup_model(model)
model = model.transform(ConvertToChannelsLastAndClean())
model = model.transform(GemmToMatMul())
model = cleanup_model(model)
onnx.save(model.model, 'transformed_model.onnx')

import hls4ml
from hls4ml.converters import convert_from_onnx_model
from hls4ml.utils.config import config_from_onnx_model

# Fails here with "Could not find the shape for input"
config = hls4ml.utils.config.config_from_onnx_model(model)

Expected behavior

Successful creation of the config from the model.

Actual behavior

Warning:  it is recommended to pass the backend to "config_from_onnx_model"
Output layers:  ['Resize_0']
Input shape: [1, 25, 25]
Topology:
Layer name: Quant_0, layer type: Quant, current shape: [[1, 1, 25, 25]]
Traceback (most recent call last):
  File "/home/user/project/ConversionQuantUpsample.py", line 54, in <module>
    config = hls4ml.utils.config.config_from_onnx_model(
  File "/home/user/project/.venv/lib/python3.10/site-packages/hls4ml/utils/config.py", line 492, in config_from_onnx_model
    layer_list, _, _ = hls4ml.converters.parse_onnx_model(model)
  File "/home/user/project/.venv/lib/python3.10/site-packages/hls4ml/converters/onnx_to_hls.py", line 244, in parse_onnx_model
    input_shapes = get_input_shape(onnx_model.graph, node)
  File "/home/user/project/.venv/lib/python3.10/site-packages/hls4ml/converters/onnx_to_hls.py", line 76, in get_input_shape
    raise RuntimeError(f'Could not find the shape for input {inp}')
RuntimeError: Could not find the shape for input

Optional

Additional context

Printing node.input gives ['Quant_0_out0', '', 'Resize_0_param0']. The empty string '', i.e. the omitted optional roi input, appears to be what causes the issue. In addition, inspecting the model with Netron shows no roi among the inputs of the Resize node (see the snippet below).
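
For reference, the same observation can be made programmatically. Below is a minimal sketch, assuming the transformed model was saved as 'transformed_model.onnx' as in the reproduction script; it prints each input of the Resize node and whether the graph can resolve a tensor by that name. The empty roi slot resolves to nothing, which is consistent with the shape lookup failing in hls4ml:

import onnx

graph = onnx.load('transformed_model.onnx').graph
resize = next(n for n in graph.node if n.op_type == 'Resize')

# Collect every tensor name the graph knows about
known = {vi.name for vi in list(graph.value_info) + list(graph.input) + list(graph.output)}
known |= {init.name for init in graph.initializer}

for inp in resize.input:
    # The omitted optional roi input shows up as '' and matches no tensor in the graph
    print(repr(inp), 'found' if inp in known else 'NOT FOUND')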

Dyoxyz added the bug label Apr 10, 2025
@nghielme (Contributor) commented

I created a fix for this issue at the QONNX level:

import numpy as np
import onnx
from onnx import helper
from qonnx.transformation.base import Transformation


class FillEmptyRoI(Transformation):
    "Fill the empty RoI input of Resize nodes to avoid issues during shape inference"

    def apply(self, model):
        graph_modified = False
        for i, node in enumerate(model.graph.node):
            if node.op_type == 'Resize':
                # 'roi' is the second input of Resize; an omitted optional input is encoded as ''
                if len(node.input) > 2 and node.input[1] == '':
                    # Add an empty roi tensor as an initializer and register its shape
                    roi = onnx.numpy_helper.from_array(np.empty([0], dtype=np.float32), node.name + "_roi")
                    model.graph.initializer.append(roi)
                    roi_value_info = helper.make_tensor_value_info(node.name + "_roi", onnx.TensorProto.FLOAT, [0])
                    model.graph.value_info.append(roi_value_info)
                    inputs = [node.input[0], node.name + "_roi", node.input[2]]

                    # Carry over the original 'mode' attribute; the remaining attributes are set to fixed values
                    mode_string = ''
                    for attr in model.graph.node[i].attribute:
                        if attr.name == 'mode':
                            mode_string = attr.s
                    new_node = onnx.helper.make_node(
                        "Resize",
                        name=node.name,
                        coordinate_transformation_mode="asymmetric",
                        cubic_coeff_a=-0.75,
                        mode=mode_string,
                        nearest_mode="floor",
                        inputs=inputs,
                        outputs=node.output,
                    )
                    # Replace the original Resize node with the rewritten one at the same position
                    model.graph.node.remove(node)
                    model.graph.node.insert(i, new_node)
                    graph_modified = True

        return (model, graph_modified)

Use it like this:

qonnx_model = qonnx_model.transform(FillEmptyRoI())
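
For completeness, a minimal sketch of how this plugs into the reproduction script above (where the wrapped model is simply called model) before generating the hls4ml config:

# Apply the RoI fix, then generate the config as before
model = model.transform(FillEmptyRoI())
config = hls4ml.utils.config.config_from_onnx_model(model)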

@jmitrevs I think we should add this part somewhere in the QONNX repo.

@Dyoxyz (Author) commented Apr 10, 2025

@nghielme Thanks, the config is now working.

@jmitrevs (Contributor)

@nghielme, do you want to make a PR to qonnx with your fix?
