
Conversation

@Angazenn Angazenn (Contributor) commented Aug 22, 2025

What this PR does / why we need it?

Replaces the CPU-based all_reduce in _get_forward_metadata_across_dp with an NPU-based all_gather so the collective runs on the accelerator, and switches the cross-rank enable_dbo decision to an any-style reduction (summarized from the review below).
Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: Angazenn <supperccell@163.com>
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request optimizes the data-parallel communication in _get_forward_metadata_across_dp by replacing a CPU-based all_reduce with an NPU-based all_gather. This is a good optimization: it moves the collective operation onto the accelerator. The new implementation also appears to correct the logic for determining enable_dbo across DP ranks, using an any operation, which seems more appropriate. I have one suggestion to further improve performance by minimizing data transfer between the NPU and the CPU.
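For context, here is a minimal sketch of the gather-based exchange the review describes. The get_dp_group().all_gather(..., dim=0) call and the max-over-column reduction mirror the diff excerpt below; the function signature, the metadata packing order, the "npu" device string, and the any-style predicates are assumptions for illustration, not the PR's exact code.

import torch

def get_forward_metadata_across_dp_sketch(num_tokens: int,
                                          with_prefill: bool,
                                          enable_dbo: bool,
                                          dp_group,
                                          device: str = "npu"):
    # Hypothetical sketch: exchange per-rank forward metadata with a single
    # device-side all_gather instead of a CPU all_reduce. `dp_group` stands
    # in for vLLM's get_dp_group(); running on "npu" requires torch_npu.
    local_forward_metadata = torch.tensor(
        [[num_tokens, with_prefill, enable_dbo]],
        device=device, dtype=torch.int32)  # bools are cast to int32
    # One collective on the device; result shape is (dp_world_size, 3).
    global_forward_metadata = dp_group.all_gather(local_forward_metadata,
                                                  dim=0)
    # Reduce on the device, then move only scalars to the host.
    maybe_padded_num_tokens = int(global_forward_metadata[:, 0].max().item())
    with_prefill_any = bool(global_forward_metadata[:, 1].any().item())
    # The review mentions an `any` reduction for enable_dbo; the exact
    # predicate used by the PR is an assumption here.
    enable_dbo_any = bool(global_forward_metadata[:, 2].any().item())
    return maybe_padded_num_tokens, with_prefill_any, enable_dbo_any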

dtype=torch.int32)
global_forward_metadata = get_dp_group().all_gather(
    local_forward_metadata, dim=0)
maybe_padded_num_tokens = global_forward_metadata[:, 0].cpu().max()
Severity: high

For better performance, it's generally recommended to perform reduction operations on the device (NPU) and only transfer the scalar result to the CPU. This avoids synchronizing and copying a larger tensor. You can change this line to perform the max() operation on the NPU before moving the result to the CPU using .item().

Suggested change
maybe_padded_num_tokens = global_forward_metadata[:, 0].cpu().max()
maybe_padded_num_tokens = global_forward_metadata[:, 0].max().item()
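As a quick toy illustration of the pattern (not code from the PR): reducing on the device first means only one scalar crosses the device-to-host boundary instead of the whole gathered column.

import torch

# Stand-in for the gathered metadata column; on an NPU build this tensor
# would live on the device rather than the host.
x = torch.tensor([7, 3, 9, 5], dtype=torch.int32)

slow = x.cpu().max()   # copies the whole tensor to the host, then reduces
fast = x.max().item()  # reduces on the device, transfers a single scalar

assert int(slow) == fast  # both yield 9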

@Angazenn Angazenn (Contributor, Author) replied:
done


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Signed-off-by: Angazenn <supperccell@163.com>

codecov bot commented Aug 22, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 78.49%. Comparing base (60ac4fb) to head (d9e687b).
⚠️ Report is 39 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2492      +/-   ##
==========================================
+ Coverage   77.70%   78.49%   +0.78%     
==========================================
  Files         132      132              
  Lines       17521    17806     +285     
==========================================
+ Hits        13615    13976     +361     
+ Misses       3906     3830      -76     
Flag        Coverage      Δ
unittests   78.49% <ø>    (+0.78%) ⬆️

Flags with carried forward coverage won't be shown.


@ApsarasX ApsarasX (Collaborator) commented Aug 23, 2025

Same as #1857?

cc @jianzs

Signed-off-by: Angazenn <supperccell@163.com>
@Angazenn Angazenn closed this Aug 28, 2025
@Angazenn Angazenn deleted the dp branch September 8, 2025 03:16