Add automated performance tests #45
Merged
Conversation
steffi7574 approved these changes on May 13, 2025.
@tdrwenski, sorry, I merged the Typos branch first. Could you recompile the user_guide.pdf? Then I can merge.
No problem, I'll resolve that. Let me check one thing about the CI before we merge this.
This reverts commit 39a2444.
…and remove weekly schedule
Add automated performance regression tests.
Regarding the CI: the performance tests run on LC machines using GitLab (ruby only for now), both on main when there are new commits and on PRs. We use the GitHub benchmark action to ingest the results and plot them on a dashboard hosted on GitHub Pages. Only results from main are added to the dashboard; results from other branches are compared against the latest dashboard results with a tolerance of 120% (configurable in the GitHub workflow). If a run is slower than that, the GitHub workflow fails, a comment is automatically added to the failing commit, and the failing check shows up on the PR.
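For reference, the comparison-and-alert step described above can be wired up with the GitHub benchmark action roughly as follows. This is a sketch only: the step names, file paths, and test directory here are illustrative assumptions, not the exact workflow file in this repository.

```yaml
# Hypothetical workflow excerpt using benchmark-action/github-action-benchmark.
- name: Run performance tests
  run: pytest tests/performance --benchmark-json=benchmark.json

- name: Compare against dashboard and alert on regressions
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: pytest
    output-file-path: benchmark.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # Fail the workflow and comment on the commit if a test is
    # slower than 120% of the latest dashboard result.
    alert-threshold: "120%"
    comment-on-alert: true
    fail-on-alert: true
    # Only push results (updating the GitHub Pages dashboard) on main.
    auto-push: ${{ github.ref == 'refs/heads/main' }}
```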
Regarding the tests: the tests themselves are run with pytest-benchmark, with test cases specified in test_cases.json. Each test executes quandary with a given configuration for n iterations, records the time each run takes, and computes an average, standard deviation, etc. The timing is done by the pytest-benchmark tool itself (i.e., an external timer). The memory usage of the last run is added to the pytest results as well, using the output that quandary itself computes and prints. The pytest-benchmark tool can also be used to run the tests locally and compare against previous runs (see the performance tests README).

The regression tests were moved into the tests directory, and paths were updated in the READMEs, docs, and CI.