Commit d2a792e (parent: 61e879e)
author: unknown

fix router logits allgather

File tree: 1 file changed (+0, -1)

vllm_ascend/ops/common_fused_moe.py

Lines changed: 0 additions & 1 deletion

@@ -452,7 +452,6 @@ def forward(
         flashcomm_v1_enabled = forward_context.flashcomm_v1_enabled
         if flashcomm_v1_enabled:
             hidden_states = torch.ops.vllm.maybe_all_gather_and_maybe_unpad(hidden_states, True)
-            router_logits = torch.ops.vllm.maybe_all_gather_and_maybe_unpad(router_logits, True)
         shared_out = self._shared_experts(hidden_states)

         # NOTE: This is exactly the opposite of `maybe_all_reduce_tensor_model_parallel`
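A minimal shape sketch of the assumed bug behind this fix (plain Python, not the actual vLLM/Ascend ops, and `all_gather_rows` is a hypothetical stand-in): with flashcomm v1 the sequence dimension of `hidden_states` is sharded across tensor-parallel ranks and must be gathered, but if `router_logits` is already full-length, gathering it a second time inflates its row count by the world size and it no longer lines up with `hidden_states` token-for-token.

```python
# Sketch only (assumption): simulate an all-gather along dim 0 with lists.
# Each rank contributes its shard; rows from all ranks are concatenated.

def all_gather_rows(shard, tp_size):
    """Hypothetical stand-in for an all-gather along the token dimension."""
    return shard * tp_size  # concatenate the per-rank shards

tp_size = 4
full_tokens = 8                                          # tokens in the batch
hidden_shard = [[0.0] * 16] * (full_tokens // tp_size)   # per-rank slice
router_logits = [[0.0] * 2] * full_tokens                # already full-length

# Gathering the sharded hidden states restores the full token count.
hidden_states = all_gather_rows(hidden_shard, tp_size)
assert len(hidden_states) == full_tokens

# Gathering the already-full router_logits duplicates every row.
bad_logits = all_gather_rows(router_logits, tp_size)
assert len(bad_logits) == full_tokens * tp_size          # 32 != 8: mismatch

# The fix in this commit is simply to skip that second gather.
assert len(router_logits) == len(hidden_states)
```

Under this reading, deleting the `router_logits` all-gather line is sufficient: the logits already cover every token that the gathered `hidden_states` does.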
