
Conversation

ghq24int
Contributor

Summary:
For the benchmark in the codebase, the larger the product of length and num-ads, the better the performance.

Two optimizations (illustrated by the sketch below):

  1. Vector loading within a warp.
  2. The product of batch-size and table-size determines the number of thread blocks (https://www.internalfb.com/code/fbsource/[cecfed562b79afad0eb9c44259141f50352da342]/fbcode/deeplearning/fbgemm/fbgemm_gpu/src/sparse_ops/sparse_reorder_batched_ad.cu?lines=361). In MRS models, our use cases call for more thread blocks. As such, we shrink the block size to launch more thread blocks, thus improving compute utilization.
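
A minimal CUDA sketch of the two ideas, for illustration only. This is not the actual FBGEMM kernel: the kernel name `reorder_copy_kernel`, the flat `seg_offsets` segment layout, and the one-segment-per-warp mapping are assumptions made for this example.

```cuda
#include <cstdint>
#include <cuda_runtime.h>

constexpr int kWarpSize = 32;

// Each warp copies one contiguous segment of int32 indices. When the segment
// start and length are multiples of 4, the lanes use 128-bit (int4) vector
// loads; otherwise they fall back to scalar copies.
__global__ void reorder_copy_kernel(
    const int32_t* __restrict__ src,
    int32_t* __restrict__ dst,
    const int32_t* __restrict__ seg_offsets, // num_segments + 1 entries (hypothetical layout)
    int num_segments) {
  const int warps_per_block = blockDim.x / kWarpSize;
  const int warp_id = blockIdx.x * warps_per_block + threadIdx.x / kWarpSize;
  const int lane = threadIdx.x % kWarpSize;
  if (warp_id >= num_segments) {
    return;
  }

  const int begin = seg_offsets[warp_id];
  const int len = seg_offsets[warp_id + 1] - begin;

  // cudaMalloc'd buffers are at least 256-byte aligned, so begin % 4 == 0 is
  // enough to make the int4 accesses below 16-byte aligned.
  if (begin % 4 == 0 && len % 4 == 0) {
    const int4* vsrc = reinterpret_cast<const int4*>(src + begin);
    int4* vdst = reinterpret_cast<int4*>(dst + begin);
    for (int i = lane; i < len / 4; i += kWarpSize) {
      vdst[i] = vsrc[i]; // each lane moves 4 values per iteration
    }
  } else {
    for (int i = lane; i < len; i += kWarpSize) {
      dst[begin + i] = src[begin + i];
    }
  }
}

// The grid is derived from batch_size * num_tables (one segment per
// (table, batch) pair here), and the block is kept small so the same amount
// of work is spread over more thread blocks.
void launch_reorder_copy(
    const int32_t* src,
    int32_t* dst,
    const int32_t* seg_offsets,
    int batch_size,
    int num_tables,
    cudaStream_t stream) {
  const int num_segments = batch_size * num_tables;
  const int block_size = 128; // e.g. 4 warps per block instead of 32
  const int warps_per_block = block_size / kWarpSize;
  const int grid_size = (num_segments + warps_per_block - 1) / warps_per_block;
  reorder_copy_kernel<<<grid_size, block_size, 0, stream>>>(
      src, dst, seg_offsets, num_segments);
}
```

Shrinking the block size does not change the total number of warps, but spreading them over more blocks gives the scheduler more blocks to distribute across SMs, which is the utilization effect described in item 2.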

Performance results and local test benchmarks: D77066925

Differential Revision: D77459476


netlify bot commented Jun 27, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

Name | Link
🔨 Latest commit | 35dd366
🔍 Latest deploy log | https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/686f7516c6f68e00083135f4
😎 Deploy Preview | https://deploy-preview-4412--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jun 29, 2025
…orch#4412)

Summary:

X-link: facebookresearch/FBGEMM#1483

For the benchmark in the codebase, the larger the product of length and num-ads, the better the performance.

Two optimizations:
1. Vector loading within a warp.
2. The product of batch-size and table-size determines the number of thread blocks (https://www.internalfb.com/code/fbsource/[cecfed562b79afad0eb9c44259141f50352da342]/fbcode/deeplearning/fbgemm/fbgemm_gpu/src/sparse_ops/sparse_reorder_batched_ad.cu?lines=361). In MRS models, our use cases call for more thread blocks. As such, we shrink the block size to launch more thread blocks, thus improving compute utilization.

Performance results and local test benchmarks: D77066925

Differential Revision: D77459476
ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jun 29, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jun 29, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

1 similar comment
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jun 29, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

@ghq24int force-pushed the export-D77459476 branch from 0f1e843 to 9933084 (July 8, 2025, 01:38)
ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 8, 2025
…orch#4412)

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 8, 2025
…orch#4412)

@ghq24int force-pushed the export-D77459476 branch from 9933084 to e529709 (July 8, 2025, 01:38)
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 8, 2025
…orch#4412)

@ghq24int force-pushed the export-D77459476 branch from e529709 to 7e89b4f (July 8, 2025, 01:43)
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

1 similar comment
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 8, 2025
…orch#4412)

@ghq24int force-pushed the export-D77459476 branch from 7e89b4f to 827c4f2 (July 8, 2025, 01:55)
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

@ghq24int force-pushed the export-D77459476 branch from 827c4f2 to 1260a57 (July 9, 2025, 16:53)
ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 9, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 10, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 10, 2025
…orch#4412)

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 10, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

1 similar comment
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

ghq24int added a commit to ghq24int/FBGEMM that referenced this pull request Jul 10, 2025
…orch#4412)

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D77459476

@facebook-github-bot
Contributor

This pull request has been merged in 3571258.

