
Conversation

ZhaoJiangJiang
Contributor

@ZhaoJiangJiang commented Aug 20, 2025

What this PR does / why we need it?

Fix mtp mode ut

Does this PR introduce any user-facing change?

Nothing

How was this patch tested?

This can be tested in the same way as a unit test.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request enables a unit test for MTP speculative decoding and includes several bug fixes and refactorings to support it. The changes span testing, the attention mechanism, the model definition, and worker logic. The test's correctness check is improved by comparing token IDs instead of text. A critical bug in the attention metadata creation for decode requests is fixed. A new load_weights method is added for the DeepSeek MTP model. My review focuses on the new load_weights implementation, where I've identified a potentially buggy condition for handling expert weights. The rest of the changes appear correct and improve the codebase.
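
As a rough illustration of the token-ID comparison mentioned above, a correctness check of this kind typically counts prompts whose generated token IDs match exactly between the reference run and the speculative run. This is only a minimal sketch; `ref_outputs`, `spec_outputs`, and the acceptance threshold are illustrative and not taken from this PR:

```python
# Minimal sketch: compare generated token IDs instead of detokenized text.
# ref_outputs / spec_outputs are assumed to be lists of vLLM RequestOutput
# objects from a reference run and an MTP speculative-decoding run.
def count_exact_matches(ref_outputs, spec_outputs) -> int:
    matches = 0
    for ref, spec in zip(ref_outputs, spec_outputs):
        ref_ids = list(ref.outputs[0].token_ids)
        spec_ids = list(spec.outputs[0].token_ids)
        # Comparing token IDs avoids false mismatches from detokenization quirks.
        if ref_ids == spec_ids:
            matches += 1
    return matches

# Example acceptance criterion (threshold is illustrative only):
# assert count_exact_matches(ref_outputs, spec_outputs) > int(0.66 * len(ref_outputs))
```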

Comment on lines 256 to 257
if (("mlp.experts." in name) and name not in params_dict):
continue
Contributor


high

The condition name not in params_dict is likely incorrect or at least very confusing. name is the key from the checkpoint, and it's not expected to be in params_dict before the name is transformed. This makes the condition almost always true, so the check is effectively if "mlp.experts." in name:.

A more robust implementation, as seen in other parts of vLLM, would be to check if the transformed name exists in params_dict. This ensures that you only skip weights that are truly not part of the current model's non-expert MLP layers.

Consider changing the condition to be similar to the deepseek_v2 implementation in vLLM for clarity and correctness.

Suggested change
if (("mlp.experts." in name) and name not in params_dict):
continue
if "mlp.experts." in name and name.replace(weight_name, param_name) not in params_dict:
continue
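
For context, here is a sketch of where the suggested check would sit inside a load_weights loop, modeled loosely on vLLM's deepseek_v2 loader. The `stacked_params_mapping` contents and the per-parameter `weight_loader` attribute follow vLLM conventions and are assumptions here, not code copied from this PR:

```python
from typing import Iterable

import torch


def load_weights_sketch(model: torch.nn.Module,
                        weights: Iterable[tuple[str, torch.Tensor]],
                        stacked_params_mapping: list[tuple[str, str, object]]):
    """Illustrative only: shows the suggested skip condition in context."""
    params_dict = dict(model.named_parameters())
    for name, loaded_weight in weights:
        for param_name, weight_name, shard_id in stacked_params_mapping:
            if weight_name not in name:
                continue
            # Skip fused-expert checkpoint keys whose *renamed* form is not a
            # parameter of this model -- the point of the suggestion above.
            if ("mlp.experts." in name
                    and name.replace(weight_name, param_name) not in params_dict):
                continue
            name = name.replace(weight_name, param_name)
            param = params_dict[name]
            # In vLLM, sharded parameters carry a custom weight_loader.
            param.weight_loader(param, loaded_weight, shard_id)
            break
```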


👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

},
max_model_len=256,
enforce_eager=True)
spec_llm = LLM(
Collaborator


Use VLLMRunner; it can clean up the memory automatically.
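
A minimal sketch of what this suggestion could look like, assuming a VllmRunner test helper that wraps the engine in a context manager; the import path, model name, and attribute/method names below are assumptions, not code from this PR:

```python
# Sketch only: a context-managed runner releases the engine and device
# memory when the block exits, instead of relying on manual cleanup.
from tests.conftest import VllmRunner  # assumed import path

MODEL = "path/to/mtp-capable-model"  # placeholder model name


def reference_generate(prompts, sampling_params):
    with VllmRunner(MODEL, max_model_len=256, enforce_eager=True) as runner:
        # `.model` is assumed to expose the underlying LLM; engine memory is
        # freed on context exit, before the speculative-decoding run starts.
        return runner.model.generate(prompts, sampling_params)
```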

inputs_embeds, spec_step_idx)
return hidden_states

def load_weights(self, weights: Iterable[tuple[str,
Collaborator


Please check whether vLLM v0.10.1 and main have this problem; if not, please remove this override.

inputs_embeds, spec_step_idx)
return hidden_states

def load_weights(self, weights: Iterable[tuple[str,
Collaborator


If this function is needed, please also add it in ./torchair/models/torchair_deepseek_mtp.py.

@ZhaoJiangJiang force-pushed the mtp_ut branch 4 times, most recently from 66d0a18 to ab667d7 on August 20, 2025 14:40

codecov bot commented Aug 20, 2025

Codecov Report

❌ Patch coverage is 95.83333% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 78.04%. Comparing base (60ac4fb) to head (6ad7d48).
⚠️ Report is 3 commits behind head on main.

Files with missing lines               Patch %   Lines
vllm_ascend/attention/mla_v1.py          0.00%   1 Missing ⚠️
vllm_ascend/torchair/torchair_mla.py    75.00%   1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2453      +/-   ##
==========================================
+ Coverage   77.70%   78.04%   +0.33%     
==========================================
  Files         132      132              
  Lines       17521    17557      +36     
==========================================
+ Hits        13615    13702      +87     
+ Misses       3906     3855      -51     
Flag        Coverage Δ
unittests   78.04% <95.83%> (+0.33%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


This pull request has conflicts, please resolve those before we can evaluate the pull request.


This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: 赵江江 <zhaojiangjiang1@h-partners.com>
@wangxiyuan merged commit 3629bc4 into vllm-project:main Aug 22, 2025
25 checks passed
wangxiaoteng888 pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Sep 25, 2025
### What this PR does / why we need it?
Fix mtp mode ut

### Does this PR introduce _any_ user-facing change?
Nothing

### How was this patch tested?
This can be tested in the same way as a unit test.


- vLLM version: v0.10.0
- vLLM main:
vllm-project/vllm@5341565

Signed-off-by: 赵江江 <zhaojiangjiang1@h-partners.com>
Co-authored-by: 赵江江 <zhaojiangjiang1@h-partners.com>
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Sep 26, 2025
### What this PR does / why we need it?
Fix mtp mode ut

### Does this PR introduce _any_ user-facing change?
Nothing

### How was this patch tested?
This can be tested in the same way as a unit test.


- vLLM version: v0.10.0
- vLLM main:
vllm-project/vllm@5341565

Signed-off-by: 赵江江 <zhaojiangjiang1@h-partners.com>
Co-authored-by: 赵江江 <zhaojiangjiang1@h-partners.com>