[BugFix]add all2all when dp_size > 1 && downgrade npu_dequant_swiglu_quant #819
Merged
Commits (15):

- 03545de: add all2all
- 1c458fb: roll back to not fuse swiglu && quant
- 5fa1814: add force load balance
- f0b120f: fix yapf
- 21fed64: fix ruff
- 965a16b: modify deepseekv2 moe
- b49c6b9: remove allgather
- 2eb4589: Merge remote-tracking branch 'upstream/main' into all2all
- 50bf997: fix codecheck
- a1f1f81: mv GroupCoordinatorPatch to worker
- a0c8f2c: fix env variable
- fda8418: remove unnecessary reduce-scatter patch
- 3c27ba5: add more comments for force load balance
- e9a3435: Merge remote-tracking branch 'upstream/main' into all2all
- 6e7ea14: Merge remote-tracking branch 'upstream/main' into all2all
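Two of the commits above carry the substance of this PR: 03545de adds the all-to-all dispatch path used when dp_size > 1, and 1c458fb rolls the fused npu_dequant_swiglu_quant kernel back to separate dequant, SwiGLU, and quant steps. For reference, here is a minimal sketch of the unfused SwiGLU activation that the rollback computes stepwise; the helper name and shapes are illustrative, and the PR's dequant/quant plumbing around it is not shown.

```python
import torch
import torch.nn.functional as F


def swiglu_unfused(gate_up: torch.Tensor) -> torch.Tensor:
    """Unfused SwiGLU on a merged gate/up projection output.

    gate_up: (num_tokens, 2 * intermediate_size), as produced by a
    combined gate/up linear layer (illustrative shape, not from the PR).
    """
    gate, up = gate_up.chunk(2, dim=-1)  # split the merged projection
    return F.silu(gate) * up  # SwiGLU: silu(gate) gates the up projection
```

Commits 5fa1814 and 3c27ba5 add and document a force-load-balance path; from the commit messages alone, this appears to force an even token-to-expert assignment instead of using the router's output in certain runs, though the specifics are in the diff itself.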
vllm_ascend/patch/worker/patch_common/patch_distributed.py (49 additions, 0 deletions):
```python
#
# Copyright (c) 2025 Huawei Technologies Co., Ltd. All Rights Reserved.
# This file is a part of the vllm-ascend project.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from typing import List, Optional

import torch
import vllm
from vllm.distributed.parallel_state import GroupCoordinator


class GroupCoordinatorPatch(GroupCoordinator):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def all_to_all(self,
                   input_: torch.Tensor,
                   scatter_dim: int = 0,
                   gather_dim: int = -1,
                   scatter_sizes: Optional[List[int]] = None,
                   gather_sizes: Optional[List[int]] = None) -> torch.Tensor:
        if self.world_size == 1:
            return input_
        assert -input_.dim() <= scatter_dim < input_.dim(), (
            f"Invalid scatter dim ({scatter_dim}) for input tensor with shape {input_.size()}"
        )
        assert -input_.dim() <= gather_dim < input_.dim(), (
            f"Invalid gather dim ({gather_dim}) for input tensor with shape {input_.size()}"
        )
        return self.device_communicator.all_to_all(input_, scatter_dim,
                                                   gather_dim, scatter_sizes,
                                                   gather_sizes)


vllm.distributed.parallel_state.GroupCoordinator = GroupCoordinatorPatch  # Note: check the GroupCoordinator with online serving
```
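The file's only behavioral addition is the all_to_all method. Its exact semantics come from the underlying device communicator, but the signature suggests the usual collective: each rank splits input_ into world_size chunks along scatter_dim (optionally unevenly, via scatter_sizes), sends chunk i to rank i, and concatenates what it receives along gather_dim (sized by gather_sizes). Below is a single-process reference sketch of those semantics under even splits; the helper name and example shapes are illustrative, not part of the patch.

```python
import torch


def all_to_all_reference(inputs_per_rank, scatter_dim=0, gather_dim=-1):
    """Single-process model of an even-split all-to-all.

    inputs_per_rank: one tensor per simulated rank. Each rank's tensor is
    split into world_size chunks along scatter_dim; chunk i goes to rank i,
    and each rank concatenates its received chunks along gather_dim.
    """
    world_size = len(inputs_per_rank)
    # chunks[src][dst] is the piece rank `src` sends to rank `dst`
    chunks = [t.chunk(world_size, dim=scatter_dim) for t in inputs_per_rank]
    return [
        torch.cat([chunks[src][dst] for src in range(world_size)], dim=gather_dim)
        for dst in range(world_size)
    ]


# Example: 2 ranks, each holding a (4, 2) tensor
rank_inputs = [torch.arange(8.).reshape(4, 2) + 10 * r for r in range(2)]
outputs = all_to_all_reference(rank_inputs, scatter_dim=0, gather_dim=-1)
print(outputs[0].shape)  # torch.Size([2, 4]): half the rows, twice the columns
```

Note the design choice in the last line of the patch: instead of wrapping individual process groups, it swaps the GroupCoordinator class inside vllm.distributed.parallel_state at import time, so every coordinator constructed afterwards picks up the new method; the trailing comment flags that this still needs checking against online serving.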