
Conversation

@panchao-hub (Contributor) commented Sep 12, 2025

What this PR does / why we need it?

[Bugfix]: replace npu_incre_flash_attention with npu_fused_infer_attention_score so that the tiling can be updated

Does this PR introduce any user-facing change?

No

How was this patch tested?


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request replaces npu_incre_flash_attention with npu_fused_infer_attention_score as a bugfix. The core change in the attention implementation appears correct. However, an associated end-to-end test has been weakened by the removal of assertions, which compromises its ability to catch regressions. I have provided a comment to restore a basic check to maintain test integrity.

Comment on lines 211 to 212
for i in range(len(vllm_output)):
    print(f"Generated text: {vllm_output[i][1]!r}")

Severity: high

This test has been weakened by removing all assertions. It now only runs the model and prints the output, but does not verify anything about the output. If vllm_model.generate_greedy returns an empty list, this test will still pass. At a minimum, it should assert that the number of outputs matches the number of input prompts to ensure it's still a meaningful test.

Suggested change
-for i in range(len(vllm_output)):
-    print(f"Generated text: {vllm_output[i][1]!r}")
+assert len(vllm_output) == len(example_prompts)
+for i in range(len(vllm_output)):
+    print(f"Generated text: {vllm_output[i][1]!r}")

@panchao-hub force-pushed the bugfix0912 branch 5 times, most recently from 36be2b6 to be3f6e5 on September 15, 2025 at 06:32
@realliujiaxu (Contributor) commented:

Please describe why this replacement was made.

@panchao-hub (Contributor, Author) commented Sep 15, 2025

> Please describe why this replacement was made.

The npu_incre_flash_attention interface cannot update the tiling. In addition, npu_incre_flash_attention will not be maintained in the future.
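
For context, a minimal sketch of the shape of the call-site change, assuming a paged-KV decode path; the keyword names (num_heads, num_key_value_heads, input_layout, scale, block_table, block_size) follow the torch_npu documentation as best understood here and are assumptions, not the PR's exact diff:

import torch_npu

def decode_attention(query, key_cache, value_cache, block_table,
                     num_heads, num_kv_heads, scale, block_size):
    # Illustrative replacement target: npu_fused_infer_attention_score
    # can refresh its tiling between decode steps and returns the
    # attention output together with the softmax LSE, whereas
    # npu_incre_flash_attention cannot update tiling and is
    # unmaintained going forward.
    attn_output, _softmax_lse = torch_npu.npu_fused_infer_attention_score(
        query, key_cache, value_cache,
        num_heads=num_heads,
        num_key_value_heads=num_kv_heads,
        input_layout="BSH",
        scale=scale,
        block_table=block_table,
        block_size=block_size,
    )
    return attn_output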

def stubbed_get_state(ep_size, with_prefill, is_deepseek_v3_r1):
    return _get_fused_moe_state(16, with_prefill, is_deepseek_v3_r1)

with patch("vllm_ascend.ascend_forward_context._get_fused_moe_state",
A Collaborator commented:

Why are we using patch in an e2e test? It's not correct IMO.

@panchao-hub (Contributor, Author) replied:

The torchair graph mode only supports MC2, and MC2 can be enabled only when the EP size is greater than or equal to 16.
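
Concretely, the stub above forces the EP size seen by _get_fused_moe_state to 16 so that the MC2 path is selected even on small CI machines. A minimal sketch of how the truncated with patch(...) block presumably continues; run_torchair_graph_case is a hypothetical stand-in for the test's actual generation call:

from unittest.mock import patch

from vllm_ascend.ascend_forward_context import _get_fused_moe_state

def stubbed_get_state(ep_size, with_prefill, is_deepseek_v3_r1):
    # Force EP size 16 so the MC2 fused-MoE path (required by the
    # torchair graph mode) is taken regardless of the real world size.
    return _get_fused_moe_state(16, with_prefill, is_deepseek_v3_r1)

with patch("vllm_ascend.ascend_forward_context._get_fused_moe_state",
           side_effect=stubbed_get_state):
    run_torchair_graph_case()  # hypothetical: the test body under the stub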

…tion_score

Signed-off-by: p00465316 <panchao13@huawei.com>
@wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels on Sep 18, 2025
@wangxiyuan merged commit a7f8ed3 into vllm-project:main on Sep 18, 2025
47 checks passed