g++ error when running build #1269
Comments
Not gonna work. A much, much smaller model may work with io_stream. And a much, much, much smaller model may work with io_parallel. Each "much" being an order of magnitude. Docs are your friend, consult them 😉.
Hi @vloncar, thanks for your reply. I get that it might not be synthesizable; however, my issue is that the model doesn't finish the compile step of hls4ml (I haven't called build yet): it generates the files, I get the done flag, and then it crashes the g++ compiler. I would assume that I can still get some HLS project out even if it's too big, no? :)
Because it generates huge source files in io_parallel (you didn't pass the option, so it defaults to that), and the compiler simply fails. Look at the memory spike on your machine when you run the compile command. I think it will compile if you use io_stream; it will be a long process, but it should work on a machine with a normal amount of memory. But then when you try to run predictions you'll see how slow ap_fixed truly is :-)
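For reference, a minimal sketch of the suggested change, reusing the conversion call from the report below; the only substantive difference is the io_type argument (the output directory name here is a placeholder):

```python
import keras
import hls4ml

# Sketch only: same conversion as in the report below, but with io_type set to
# 'io_stream' so convolutions are generated as streaming loops instead of being
# fully unrolled per pixel (the converter defaults to 'io_parallel').
cnn_model = keras.models.load_model('deep_CNN_98acc_mar26.keras', compile=False)

hls_config = hls4ml.utils.config_from_keras_model(
    cnn_model, granularity='name', backend='Vitis',
    default_precision='ap_fixed<16,6>')

hls_model = hls4ml.converters.convert_from_keras_model(
    cnn_model,
    hls_config=hls_config,
    backend='Vitis',
    io_type='io_stream',                    # streamed I/O instead of fully parallel
    output_dir='adamCNN/hls4ml_prj_stream', # placeholder output directory
    part='xcvu9p-fsgd2104-2L-e')

hls_model.compile()
```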
@vloncar thanks for the reply, indeed with io_stream it did compile, right away, and it wasn't even long. A bit puzzled, though, to understand what the "limit" is, because it crashed at around 20 GB of RAM, which is not much on the machine I'm using. :)
Partly, it is the difference in the algorithm behind this. To achieve the best performance, in io_parallel the im2col transformation of the convolution is manually unrolled with specific instructions for each pixel (see file …)
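To illustrate the point (this is not hls4ml's generated code, just a rough sketch of the im2col idea): im2col gives every output pixel its own slice of the input, so an implementation that unrolls it emits code per pixel, and for feature maps like 72×122×128 that becomes an enormous amount of generated C++.

```python
import numpy as np

def im2col_2d(x, kh, kw):
    """Rough im2col sketch for a single-channel image (stride 1, no padding).
    Each output row holds the receptive field of one output pixel."""
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw), dtype=x.dtype)
    for i in range(out_h):            # in io_parallel this loop nest is
        for j in range(out_w):        # effectively unrolled, pixel by pixel
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

x = np.arange(16, dtype=float).reshape(4, 4)
print(im2col_2d(x, 3, 3).shape)  # (4, 9): one row per output pixel
```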
Prerequisites
Please make sure to check off these prerequisites before submitting a bug report.
Quick summary
I'm getting the following error when running the HLS4ML build process.
g++: internal compiler error: Segmentation fault signal terminated program cc1plus
Please submit a full bug report,
with preprocessed source if appropriate.
See http://bugs.almalinux.org/ for instructions.
Details
I've tried with various HLS4ML and gcc versions, within a Docker container and natively on my build machine, and the error repeats. Basically, I have a rather large CNN for which I'm trying to get the firmware estimates. The model is the following:
Layer (type)                     Output Shape              Param #
=================================================================
conv2d (Conv2D)                  (None, 72, 122, 128)      1280
max_pooling2d (MaxPooling2D)     (None, 36, 61, 128)       0
conv2d_1 (Conv2D)                (None, 34, 59, 128)       147584
max_pooling2d_1 (MaxPooling2D)   (None, 17, 29, 128)       0
conv2d_2 (Conv2D)                (None, 15, 27, 128)       147584
max_pooling2d_2 (MaxPooling2D)   (None, 7, 13, 128)        0
flatten (Flatten)                (None, 11648)              0
dense (Dense)                    (None, 16)                 186384
dropout (Dropout)                (None, 16)                 0
dense_1 (Dense)                  (None, 1)                  17
=================================================================
Total params: 482849 (1.84 MB)
Trainable params: 482849 (1.84 MB)
Non-trainable params: 0 (0.00 Byte)
The HLS4ML versions I've tried are 0.8.1, 1.0.0, and 1.1.0. I also tried with Vitis 2022.0 and 2024.1.
The way I compile it is the following:
import os
import keras
import hls4ml

# Make the Vitis tools visible to hls4ml
os.environ['PATH'] = os.environ['XILINX_VITIS'] + '/bin:' + os.environ['PATH']

cnn_model = keras.models.load_model('deep_CNN_98acc_mar26.keras', compile=False)
cnn_model.summary()

hlsConfig = hls4ml.utils.config_from_keras_model(
    cnn_model, granularity='name', backend='Vitis', default_precision='ap_fixed<16,6>')
hlsModel = hls4ml.converters.convert_from_keras_model(
    cnn_model, hls_config=hlsConfig, backend='Vitis',
    output_dir='adamCNN/hls4ml_prj', part='xcvu9p-fsgd2104-2L-e')
hlsModel.compile()  # this is the step where g++ crashes
HLS4ML does generate the firmware files and the project, but it then fails. I've gone back to testing the HLS4ML example FC model and everything works fine, so I'm not sure whether this is related only to this specific model.
Steps to Reproduce
I can provide the scripts and everything if needed to reproduce.
Expected behavior
I would have expected the model compilation to finish, since all the files are generated.
Actual behavior
Instead, the g++ compiler crashes without any other log information. I tried changing the GCC compiler and still get the same issue.
Optional
Possible fix
If you already know where the issue stems from, or you have a hint, please let us know.
Additional context
Add any other context about the problem here.