Commit 5da2849

fix pre-commit
Signed-off-by: chenmenglong <chenmenglong1@huawei.com>
1 parent cae6215 commit 5da2849

File tree

1 file changed: +8 −8 lines changed


vllm_ascend/ops/moe/fused_moe_prepare_and_finalize.py

Lines changed: 8 additions & 8 deletions
@@ -186,10 +186,10 @@ def finalize(self, hidden_states: torch.Tensor,
                                  self.moe_config.tp_group.device_group)
         hidden_states = torch.cat(self.split_hidden_states, dim=0)
 
-        # TODO: It is a quick bugfix for the single-operator memory explosion issue
-        # that requires further restructuring.
-        # If the cache is not cleared after `self.split_hidden_states` is created,
-        # it can lead to the single-operator memory explosion.
+        # TODO: It is a quick bugfix for the memory explosion issue in eager mode
+        # that requires further restructuring.
+        # If the cache is not cleared after `self.split_hidden_states` is created,
+        # it can lead to the memory explosion in eager mode.
         del self.split_hidden_states
 
         # Unpad if necessary
@@ -276,10 +276,10 @@ def finalize(self, hidden_states: torch.Tensor,
                                  self.moe_config.tp_group.device_group)
         hidden_states = torch.cat(self.split_hidden_states, dim=0)
 
-        # TODO: It is a quick bugfix for the single-operator memory explosion issue
-        # that requires further restructuring.
-        # If the cache is not cleared after `self.split_hidden_states` is created,
-        # it can lead to the single-operator memory explosion.
+        # TODO: It is a quick bugfix for the memory explosion issue in eager mode
+        # that requires further restructuring.
+        # If the cache is not cleared after `self.split_hidden_states` is created,
+        # it can lead to the memory explosion in eager mode.
         del self.split_hidden_states
 
         if self.num_tokens < hidden_states.shape[0]:
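The pattern the touched comments describe, concatenating cached splits and then deleting the cache so the splits can be freed, can be sketched as below. This is a hypothetical minimal analogue, not the vllm-ascend implementation: the real `finalize()` gathers `torch.Tensor` chunks across the tensor-parallel group and joins them with `torch.cat`; plain lists stand in here.

```python
class FinalizeSketch:
    """Hypothetical stand-in for the finalize() pattern in the diff."""

    def __init__(self, split_hidden_states):
        # Cached per-split buffers, standing in for self.split_hidden_states
        # (torch.Tensor chunks in the real code).
        self.split_hidden_states = split_hidden_states

    def finalize(self):
        # Concatenate the cached splits (torch.cat(..., dim=0) in the real code).
        hidden_states = [x for split in self.split_hidden_states for x in split]
        # The quick bugfix from the commit: drop the cached attribute so the
        # split buffers become collectible. In eager mode, keeping this
        # reference alive across iterations lets the intermediate buffers
        # accumulate, which is the memory explosion the TODO refers to.
        del self.split_hidden_states
        return hidden_states
```

After `finalize()` returns, the instance no longer holds `split_hidden_states`, so only the concatenated result keeps the data alive.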
