
Add inplace quantizer examples #2345

Merged · 1 commit · Jun 18, 2025

Conversation

cccclai
Contributor

@cccclai cccclai commented Jun 10, 2025

Summary:
Add a quantizer example for in-place ops.

Differential Revision: D76312488
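For context, a PT2E-style quantizer that covers in-place ops generally has to treat the in-place variant (e.g. `aten.add_.Tensor`) the same way as its functional counterpart. A torch-free schematic of that idea (the table and function below are hypothetical stand-ins, not the code added in this PR):

```python
# Hypothetical lookup table: each in-place op reuses the quantization
# spec of its functional (out-of-place) counterpart.
INPLACE_TO_FUNCTIONAL = {
    "aten.add_.Tensor": "aten.add.Tensor",
    "aten.relu_.default": "aten.relu.default",
}

def annotation_target(op_name: str) -> str:
    """Resolve an op name to the key used when looking up its
    quantization spec, so in-place ops get annotated exactly like
    their out-of-place forms."""
    return INPLACE_TO_FUNCTIONAL.get(op_name, op_name)

print(annotation_target("aten.add_.Tensor"))  # aten.add.Tensor
```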


pytorch-bot bot commented Jun 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2345

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 62483bc with merge base 7e7ea92:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jun 10, 2025
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D76312488

@cccclai cccclai force-pushed the export-D76312488 branch from 2779ddb to 6e1edbb on June 16, 2025 00:28
cccclai added a commit to cccclai/ao that referenced this pull request Jun 16, 2025
Summary:
Pull Request resolved: pytorch#2345

Add a quantizer example for in-place ops.

Differential Revision: D76312488
@cccclai cccclai force-pushed the export-D76312488 branch from 6e1edbb to 1a54331 on June 16, 2025 00:32
@cccclai cccclai force-pushed the export-D76312488 branch from 1a54331 to 52dbbc6 on June 17, 2025 00:49
cccclai added a commit to cccclai/ao that referenced this pull request Jun 17, 2025
Summary:
Pull Request resolved: pytorch#2345

Add a quantizer example for in-place ops, and patch the constant-folding pass so that mutable buffers are not folded

Differential Revision: D76312488
```python
# Identify mutable buffers by finding copy_ operations
self.mutable_buffers = self._find_mutable_buffers()

def _find_mutable_buffers(self) -> set[torch.fx.Node]:
```
Contributor

Why is this change needed? Is there a test that exercises this code path?

Contributor Author

Yeah, the added test will fail without this code path: the quantize_per_tensor_default node would be folded together with the mutable buffer, which is not what we want.
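The buffer detection under discussion scans the exported graph for `copy_` nodes: a buffer mutation shows up as `aten.copy_(buffer, new_value)`, so the destination of each copy is a mutable buffer. A minimal sketch of that traversal, using a plain-Python stand-in for `torch.fx.Node` (the `Node` class and toy graph below are hypothetical, not the PR's code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """Hypothetical stand-in for torch.fx.Node: op kind, target, args."""
    op: str
    target: str
    args: tuple = ()

def find_mutable_buffers(nodes):
    """Collect placeholder nodes that are written to in place.

    In an exported graph a buffer mutation appears as
    aten.copy_(buffer, new_value), so the destination (first arg)
    of every copy_ call is a mutable buffer.
    """
    mutable = set()
    for node in nodes:
        if node.op == "call_function" and "copy_" in node.target:
            dest = node.args[0]
            if isinstance(dest, Node) and dest.op == "placeholder":
                mutable.add(dest)
    return mutable

# Tiny example graph: `buf` is mutated in place, `x` is a plain input.
buf = Node("placeholder", "buf")
x = Node("placeholder", "x")
add = Node("call_function", "aten.add.Tensor", (buf, x))
cp = Node("call_function", "aten.copy_.default", (buf, add))

assert find_mutable_buffers([buf, x, add, cp]) == {buf}
```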

Contributor

@jerryzh168 jerryzh168 left a comment

makes sense, thanks!

cccclai added a commit to cccclai/ao that referenced this pull request Jun 17, 2025
Summary:
Pull Request resolved: pytorch#2345

Add a quantizer example for in-place ops, and patch the constant-folding pass so that mutable buffers are not folded

Reviewed By: jerryzh168

Differential Revision: D76312488
@cccclai cccclai force-pushed the export-D76312488 branch from 52dbbc6 to 7c1133d on June 17, 2025 19:17
@cccclai cccclai force-pushed the export-D76312488 branch from 7c1133d to 83395a6 on June 17, 2025 20:13
@cccclai cccclai force-pushed the export-D76312488 branch from 83395a6 to e4e84dd on June 17, 2025 20:25
@cccclai cccclai marked this pull request as ready for review June 17, 2025 20:28
@cccclai cccclai force-pushed the export-D76312488 branch from e4e84dd to f67507c on June 17, 2025 20:34
@cccclai cccclai force-pushed the export-D76312488 branch from f67507c to 0498b7d on June 17, 2025 20:37
pytorch-bot bot pushed a commit to pytorch/pytorch that referenced this pull request Jun 17, 2025
Summary:
Similar to pytorch/ao#2345

During constant folding, we shouldn't fold mutable buffers. The pass first identifies the mutable buffers, then skips them during folding.

Test Plan:
The added unit test test_constant_folding_mutable_buffer

Differential Revision: D76844103
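The idea in that companion pass is the same two-step shape: collect the mutable buffers first, then refuse to fold anything that depends on one of them. A toy, torch-free illustration (the function and graph representation below are hypothetical, not the actual pass):

```python
def constant_fold(nodes, constants, mutable_buffers):
    """Fold nodes whose inputs are all known constants, skipping any
    node that reads a mutable buffer (its value can change at runtime)."""
    folded = {}
    for name, (fn, inputs) in nodes.items():
        if any(i in mutable_buffers for i in inputs):
            continue  # never fold values derived from mutable state
        values = []
        for i in inputs:
            if i in constants:
                values.append(constants[i])
            elif i in folded:
                values.append(folded[i])
            else:
                break  # depends on a non-constant runtime value
        else:
            folded[name] = fn(*values)
    return folded

# `scale`/`zp` are frozen constants; `buf` is a mutable buffer.
constants = {"scale": 2.0, "zp": 0.0}
nodes = {
    "q_params": (lambda s, z: (s, z), ["scale", "zp"]),  # foldable
    "q_buf": (lambda b: b, ["buf"]),                     # must NOT fold
}
folded = constant_fold(nodes, constants, {"buf"})
assert "q_params" in folded and "q_buf" not in folded
```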
@cccclai cccclai force-pushed the export-D76312488 branch from 0498b7d to 5f38f5c on June 17, 2025 20:45
@jerryzh168 jerryzh168 added the topic: improvement Use this tag if this PR is an improvement (doesn't fit into any of the other categories) label Jun 17, 2025
@cccclai cccclai force-pushed the export-D76312488 branch from 5f38f5c to 62483bc on June 17, 2025 20:56
@facebook-github-bot facebook-github-bot merged commit 804fa1e into pytorch:main Jun 18, 2025
20 of 21 checks passed
Labels: CLA Signed · fb-exported · topic: improvement

3 participants