
[Benchmarks] Remove additional baseline calculation #2303


Draft · wants to merge 2 commits into main

Conversation

jainapurva (Contributor) commented:

The baseline was being calculated twice: once by default, and once for every shape and technique when computing speedup. This PR removes the default baseline calculation. If baseline numbers are needed, they can still be calculated by adding `baseline` to the quantization recipes in the benchmark config.
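For context, a minimal sketch of what explicitly requesting the baseline might look like in a benchmark config; the key name `quantization_config_recipe_names` and the recipe names are assumptions for illustration, not taken from this PR:

```yaml
# Hypothetical benchmark config sketch; key and recipe names are illustrative assumptions.
quantization_config_recipe_names:
  - "baseline"   # add explicitly if baseline numbers are needed
  - "int8wo"     # example quantization technique to compare against
```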


pytorch-bot bot commented Jun 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2303

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c51425f with merge base 152a8e3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Jun 4, 2025
jainapurva changed the title from "Remove additional baseline calculation" to "[Benchmarks] Remove additional baseline calculation" on Jun 4, 2025
jainapurva added the topic: bug fix label on Jun 4, 2025
jainapurva requested a review from HDCharles on Jun 4, 2025 at 17:36