
Commit 7ada9aa

[Bugfix][LoRA] Fix bug introduced by upstream vllm#25249
Signed-off-by: paulyu12 <507435917@qq.com>
1 parent: c90a6d3

File tree

1 file changed: +1 / -4 lines


vllm_ascend/worker/model_runner_v1.py

Lines changed: 1 addition & 4 deletions
@@ -2525,10 +2525,7 @@ def load_model(self) -> None:
                 self.model.get_eagle3_aux_hidden_state_layers())
 
         if self.lora_config:
-            self.model = self.load_lora_model(self.model,
-                                              self.model_config,
-                                              self.scheduler_config,
-                                              self.lora_config,
+            self.model = self.load_lora_model(self.model, self.vllm_config,
                                               self.device)
         logger.info("Loading model weights took %.4f GB",
                     m.consumed_memory / float(2**30))
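
For context, upstream vllm#25249 changed `load_lora_model` to take the aggregated `VllmConfig` in place of the individual `model_config`, `scheduler_config`, and `lora_config` objects, so the old five-argument call site in vllm-ascend no longer matched the signature. The sketch below illustrates that consolidation; the config field names and the helper body are assumptions inferred from the diff above, not the upstream implementation.

from dataclasses import dataclass

# Minimal stand-ins for vLLM's config classes (the real ones live in
# vllm.config and carry many more fields); hypothetical, for illustration.
@dataclass
class ModelConfig:
    model: str = "dummy-model"

@dataclass
class SchedulerConfig:
    max_num_seqs: int = 256

@dataclass
class LoRAConfig:
    max_loras: int = 1

@dataclass
class VllmConfig:
    model_config: ModelConfig
    scheduler_config: SchedulerConfig
    lora_config: LoRAConfig

def load_lora_model(model, vllm_config: VllmConfig, device: str):
    # Post-#25249 shape (as inferred from this diff): the helper unpacks the
    # sub-configs from the single VllmConfig instead of receiving them as
    # separate positional parameters.
    lora_cfg = vllm_config.lora_config
    sched_cfg = vllm_config.scheduler_config
    print(f"Wrapping model with LoRA support: max_loras={lora_cfg.max_loras}, "
          f"max_num_seqs={sched_cfg.max_num_seqs}, device={device}")
    return model

cfg = VllmConfig(ModelConfig(), SchedulerConfig(), LoRAConfig())
model = load_lora_model(object(), cfg, "npu:0")

With the old five-argument call against the new three-parameter signature, Python would raise a TypeError for the extra positional arguments, which is the breakage this commit repairs on the Ascend side.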
