Restrict ONNX opset to 16 and up #3051
Conversation
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Update struct TensorData field access patterns
- Add support for non-optional data fields
- Fix issues with tensor data handling
The PR resolves several compilation errors caused by changes to the TensorType structure:
1. Modified code to adapt to the removal of the `shape` field from TensorType
2. Fixed pattern matching issues in rank_inference.rs to properly match against TensorData.data
3. Updated the remap_unsqueeze_to_reshape function in from_onnx.rs to work with the new API
4. Fixed unused imports across multiple files
5. Fixed function calls that were using Option.len() incorrectly
Enhance tensor type system to support both static shapes and dynamic ranks across multiple ONNX operations including Expand, RandomNormal, Constant, and related nodes. Ensure proper shape validation and improve type safety throughout the conversion process.
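For context, the static-shape vs. dynamic-rank distinction shows up directly in ONNX value infos. A minimal sketch using the onnx helper API (the tensor names and shapes here are illustrative, not taken from this PR):

```python
import onnx
from onnx import TensorProto, helper

# Fully static shape: every dimension is a concrete integer.
static = helper.make_tensor_value_info(
    "x_static", TensorProto.FLOAT, [1, 3, 224, 224]
)

# Dynamic shape: the rank (4) is known, but the dimensions are
# symbolic, so an importer can rely only on the rank.
dynamic = helper.make_tensor_value_info(
    "x_dynamic", TensorProto.FLOAT, ["N", "C", "H", "W"]
)

print(static.type.tensor_type.shape)   # dim_value entries: 1, 3, 224, 224
print(dynamic.type.tensor_type.shape)  # dim_param entries: "N", "C", "H", "W"
```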
Codecov Report
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3051      +/-   ##
==========================================
- Coverage   81.12%   81.10%   -0.03%
==========================================
  Files         816      816
  Lines      117358   117312      -46
==========================================
- Hits        95210    95142      -68
- Misses      22148    22170      +22
```
It makes sense to restrict to more recent opsets. The ONNX spec is already large enough; we should try to limit the scope 😅
In this case, I would simply delete the tests and ONNX files associated with older opsets.
LGTM overall! Mostly minor comments.
Will wait for the linked PR to be merged before this review is finalized.
burn-book/src/import/onnx-model.md
Option 1: Use the provided utility script:

```
uv run --script https://raw.githubusercontent.com/tracel-ai/burn/refs/heads/main/crates/burn-import/onnx-opset-upgrade.py
```
Should probably provide a default command using python (regardless of the package manager used)
True, but I am seeing more people shifting towards uv. We are using it consistently with Burn too.
There are a few convenient aspects of this example:
- uv downloads the script directly and executes it.
- It creates the necessary virtual environment.
- It installs the dependency packages specified in the script itself.
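The dependencies can live in the script because uv understands PEP 723 inline script metadata. A hypothetical header for a script like onnx-opset-upgrade.py might look like this (the exact dependency list is an assumption, not copied from the real script):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "onnx",
# ]
# ///
#
# uv reads the metadata block above, creates a throwaway virtual
# environment, installs the listed dependencies, and runs the script.
import onnx

print(onnx.__version__)
```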
Oh I am not advocating to remove the uv command entirely, just that we might want to provide the basic steps to convert the model with python using the provided script 🙂
They can download the script and run it with python, like python3 onnx_opset_upgrade.py, or they can follow the python script steps.
Another alternative is to call uv as a library. We could put it behind a feature flag. This pattern could also work with the dataset crate. See astral-sh/uv#4642
burn-book/src/import/onnx-model.md
```python
import onnx
from onnx import version_converter

# Load the ONNX model
model = onnx.load('path/to/your/model.onnx')

# Upgrade to opset version 16
converted_model = version_converter.convert_version(model, 16)

# Save the upgraded model
onnx.save(converted_model, 'upgraded_model.onnx')
```
Btw, why are there so many more steps in the onnx_opset_upgrade.py script? Actually, most of the additional code is for debug info or user input, but what about the additional shape inference?
Updated the document to include that step. I added it as a safety measure in case the older opset files didn't already have inferred shapes, but also as a precursor to using inferred node output shapes. For now, I think we can figure out input/output ranks from the static shapes.
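For reference, the shape inference step being discussed maps to a single call in the onnx package. A minimal sketch combining it with the upgrade (file names are placeholders):

```python
import onnx
from onnx import shape_inference, version_converter

model = onnx.load('model.onnx')
upgraded = version_converter.convert_version(model, 16)

# Re-run shape inference after the upgrade so value_info entries carry
# inferred shapes, even if the original file lacked them.
inferred = shape_inference.infer_shapes(upgraded)
onnx.save(inferred, 'upgraded_model.onnx')
```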
Related Issue:
Fixes #3046
Changes:
- Removed older opset test files (clip_opset7, dropout_opset7, reduce_sum_opset11) to align with opset 16. A quick way to check whether a model needs upgrading is sketched below.
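As referenced above, a hypothetical pre-flight check for whether a model already satisfies the new minimum (the threshold of 16 comes from this PR; the helper itself is illustrative, not part of the codebase):

```python
import onnx

MIN_OPSET = 16  # minimum default-domain opset accepted after this PR

model = onnx.load('model.onnx')
# Each opset_import entry pins a version for one operator domain;
# the default ONNX domain is the empty string (or "ai.onnx").
opset = next(
    entry.version
    for entry in model.opset_import
    if entry.domain in ("", "ai.onnx")
)
if opset < MIN_OPSET:
    print(f"Opset {opset} < {MIN_OPSET}: upgrade the model first.")
```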
Testing:
Checklist:
- Ran the run-checks all script.