Prerequisites
Please make sure to check off these prerequisites before submitting a bug report.
Test that the bug appears on the current version of the master branch. Make sure to include the commit hash of the commit you checked out.
Check that the issue hasn't already been reported by searching the currently open issues.
If there are steps to reproduce the problem, make sure to write them down below.
If relevant, please include the hls4ml project files created directly before and/or after the bug.
Quick summary
Unsupported aggregate pragma/directive on variable 'layer63_out' as the bit-width after aggregation (6272) is larger than 4096
Details
When converting VGG models with hls4ml, I consistently run into this problem with variables at the fully connected layer.
ERROR: [HLS 214-256] in function 'vgg16(hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 3u>, 0>&, hls::stream<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 10u>, 0>&)': Unsupported aggregate pragma/directive on variable 'layer63_out' as the bit-width after aggregation (6272) is larger than 4096 (firmware/vgg16.cpp:323:35)
vgg16_prj:solution1 Mar 25, 2025, 10:43:24 PM
With default_precision='fixed<8,4>' the problem does not occur, but with the default setting default_precision='fixed<16,6>' it does. Is there any way to avoid this? A minimal sketch of a conversion setup that reproduces the issue follows.
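This is a minimal sketch only: the stock Keras VGG16 stands in for the actual model (which is not attached), and the io_stream setting is an assumption, since the original conversion script was not provided.

    import hls4ml
    from tensorflow.keras.applications import VGG16

    # Stand-in for the reporter's model (assumption, not from the report)
    model = VGG16(weights=None)

    config = hls4ml.utils.config_from_keras_model(
        model,
        granularity='model',
        default_precision='fixed<16,6>',  # 'fixed<8,4>' avoids the error
    )

    hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=config,
        io_type='io_stream',  # the 4096-bit element limit applies to streams
    )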
Steps to Reproduce
Describe what needs to be done to reproduce the bug. Include commented code examples, the original model files / code, and the commit hash you are working on.
Clone the hls4ml repository
Check out the master branch, with commit hash: [...]
Run conversion [...] on model file with code [...]
[Further steps ...]
Expected behavior
Please add a brief description of what you expected to happen.
Actual behavior
Describe what actually happens instead.
Optional
Possible fix
If you already know where the issue stems from, or you have a hint please let us know.
Additional context
Add any other context about the problem here.
Streams in hls4ml stream one "pixel" at a time, a "pixel" being the full set of in/out channels of a tensor. For example, an H×W×C image using b bits per value is streamed as H*W elements, each C*b bits wide. In your model, the C of some tensor (likely the number of output filters of a Conv layer) grew large enough that C*16 exceeds 4096. The streams have an internal maximum element size in bits. This is a constraint of HLS, not of hls4ml, so the only way around it is to reduce the number of filters in your model until every tensor is below the limit. In some distant future we'll explore alternative stream implementations in hls4ml that won't have this limitation (there is some PoC work already, but it is not automated).
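To make the arithmetic concrete, a minimal sketch in plain Python (no hls4ml API); the 392-channel figure is inferred from the error above, since 6272 / 16 = 392, not taken from the actual model:

    HLS_STREAM_LIMIT_BITS = 4096  # HLS cap on the aggregated element width

    def stream_element_bits(channels, bits_per_value):
        # One stream element carries every channel of a single "pixel".
        return channels * bits_per_value

    # 392 channels * 16 bits = 6272 > 4096 (fails), matching the error.
    for bits in (16, 8):
        width = stream_element_bits(392, bits)
        status = "OK" if width <= HLS_STREAM_LIMIT_BITS else "TOO WIDE"
        print(f"{bits}-bit precision: {width} bits -> {status}")

This also explains the fixed<8,4> observation: halving the word width halves the element width (392 * 8 = 3136 bits), which fits under the limit.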