Commit 1874f93

fix

Signed-off-by: David9857 <985700846@qq.com>

1 parent: 33e10d3

File tree: 1 file changed, +2 −1 lines changed


vllm_ascend/models/qwen3.py — 2 additions, 1 deletion

@@ -22,6 +22,7 @@
 
 from vllm_ascend.ops.layernorm import AddRMSNormQuant
 
+
 class CustomQwen3Attention(Qwen3Attention):
 
     def __init__(self,
@@ -201,7 +202,7 @@ def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
             prefix=prefix,
             decoder_layer_type=CustomQwen3DecoderLayer)
         self.cos_sin_cache = self.layers[0].self_attn.rotary_emb.cos_sin_cache
-
+
     def forward(
         self,
         input_ids: torch.Tensor,
