
Commit 4b3bd4f

[main][bugfix] bugfix for minicpm models (#3527)
### What this PR does / why we need it?

Bugfix for minicpm-2b and minicpm3-4b.

- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: Wang Kunpeng <1289706727@qq.com>
1 parent 6c9909c commit 4b3bd4f

3 files changed, 3 insertions(+), 4 deletions(-)

.github/workflows/vllm_ascend_test.yaml

Lines changed: 0 additions & 1 deletion

@@ -121,7 +121,6 @@ jobs:
           export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/x86_64-linux/devlib
           pytest -sv --cov --cov-report=xml:unittests-coverage.xml tests/ut \
             --ignore=tests/ut/test_platform.py \
-            --ignore=tests/ut/patch/worker/patch_common/test_patch_minicpm.py \
             --ignore=tests/ut/core/test_scheduler.py \
             --ignore=tests/ut/kv_connector/test_llmdatadist_connector.py \
             --ignore=tests/ut/kv_connector/test_mooncake_connector.py \

vllm_ascend/patch/worker/patch_common/__init__.py

Lines changed: 1 addition & 3 deletions

@@ -26,6 +26,4 @@
 import vllm_ascend.patch.worker.patch_common.patch_roberta # noqa
 import vllm_ascend.patch.worker.patch_common.patch_weight_loader # noqa
 import vllm_ascend.patch.worker.patch_common.patch_multimodal_merge # noqa
-
-# TODO: revert me when triton import is fixed
-# import vllm_ascend.patch.worker.patch_common.patch_minicpm # noqa
+import vllm_ascend.patch.worker.patch_common.patch_minicpm # noqa

vllm_ascend/worker/model_runner_v1.py

Lines changed: 2 additions & 0 deletions

@@ -1346,6 +1346,8 @@ def _prepare_inputs(
         positions_cpu = self.positions_cpu[:num_input_tokens]
         positions = self.positions[:num_input_tokens]
         seq_lens_cpu = self.seq_lens_cpu[:num_reqs]
+        attn_state = self._build_attn_state(num_reqs, num_scheduled_tokens,
+                                            num_valid_tokens)
         self.attn_mask = self._make_attention_mask(seq_lens=seq_lens_cpu,
                                                    position=positions_cpu,
                                                    attn_state=attn_state)
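The two added lines derive `attn_state` immediately before `_make_attention_mask` consumes it, so the mask for MiniCPM inputs is always built from the current batch's state. The snippet below is a minimal sketch of that call ordering only; `_RunnerSketch` and both helper bodies are illustrative stand-ins rather than the real `NPUModelRunner` implementation.

```python
# Minimal sketch of the call ordering restored in _prepare_inputs.
# _RunnerSketch and the helper bodies are stand-ins; only the order of the
# two calls mirrors the patched code.

class _RunnerSketch:
    def _build_attn_state(self, num_reqs, num_scheduled_tokens, num_valid_tokens):
        # Stand-in: classify the batch (e.g. prefill vs. decode) from the counts.
        return "prefill" if num_valid_tokens > num_reqs else "decode"

    def _make_attention_mask(self, seq_lens, position, attn_state):
        # Stand-in: the real helper builds an NPU attention-mask tensor.
        return {"seq_lens": seq_lens, "position": position, "state": attn_state}

    def _prepare_inputs(self, num_reqs, num_scheduled_tokens, num_valid_tokens,
                        seq_lens_cpu, positions_cpu):
        # The fix: compute attn_state first, then pass it to the mask builder.
        attn_state = self._build_attn_state(num_reqs, num_scheduled_tokens,
                                            num_valid_tokens)
        self.attn_mask = self._make_attention_mask(seq_lens=seq_lens_cpu,
                                                   position=positions_cpu,
                                                   attn_state=attn_state)
        return self.attn_mask


# Example usage of the sketch:
runner = _RunnerSketch()
mask = runner._prepare_inputs(num_reqs=2, num_scheduled_tokens=6,
                              num_valid_tokens=6, seq_lens_cpu=[3, 3],
                              positions_cpu=[0, 1, 2, 0, 1, 2])
```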
