[BugFix] Fix dummy_run memory explosion in eager mode #3132
Changes from 1 commit
@@ -186,6 +186,12 @@ def finalize(self, hidden_states: torch.Tensor,
                                  self.moe_config.tp_group.device_group)
         hidden_states = torch.cat(self.split_hidden_states, dim=0)
 
+        # TODO: This is a quick bugfix for the single-operator memory
+        # explosion issue that requires further restructuring.
+        # If the cache is not cleared after `self.split_hidden_states`
+        # is consumed, it can cause a single-operator memory explosion.
+        del self.split_hidden_states
+
         # Unpad if necessary
         if self.num_tokens < hidden_states.shape[0]:
             hidden_states = hidden_states[:self.num_tokens]
@@ -270,6 +276,12 @@ def finalize(self, hidden_states: torch.Tensor,
                                  self.moe_config.tp_group.device_group)
         hidden_states = torch.cat(self.split_hidden_states, dim=0)
 
+        # TODO: This is a quick bugfix for the single-operator memory
+        # explosion issue that requires further restructuring.
+        # If the cache is not cleared after `self.split_hidden_states`
+        # is consumed, it can cause a single-operator memory explosion.
+        del self.split_hidden_states
+
         if self.num_tokens < hidden_states.shape[0]:
             hidden_states = hidden_states[:self.num_tokens]
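The fix works because `torch.cat` materializes a full copy of the gathered data; as long as the shard list stays cached on `self`, both copies are live at the same time. Below is a minimal standalone sketch of that pattern; the `Finalizer` class, shapes, and method names are illustrative, not the PR's actual code:

import torch

class Finalizer:
    """Illustrative stand-in for the MoE finalize path, not the PR's code."""

    def prepare(self, world_size: int = 8) -> None:
        # Per-rank shards cached on `self` during the prepare phase.
        self.split_hidden_states = [
            torch.empty(1024, 4096) for _ in range(world_size)
        ]

    def finalize(self) -> torch.Tensor:
        # torch.cat allocates a second, full-size copy of the shards ...
        hidden_states = torch.cat(self.split_hidden_states, dim=0)
        # ... so while `self.split_hidden_states` stays cached, both copies
        # remain live at once. Dropping the attribute lets the allocator
        # reclaim the shards as soon as nothing else references them.
        del self.split_hidden_states
        return hidden_states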
Review comment:
Consider not assigning `self.split_hidden_states` during the prepare phase. Instead, use `get_tp_group().allgather()` in the finalize phase, which internally employs `allgather_into_tensor` to eliminate tensor move overhead.
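A minimal sketch of the suggested restructuring, written against plain `torch.distributed` rather than vllm-ascend's wrapper (whose exact `allgather()` signature is not shown here); `finalize_gather` and its parameters are illustrative names:

import torch
import torch.distributed as dist

def finalize_gather(hidden_states: torch.Tensor,
                    group: dist.ProcessGroup,
                    num_tokens: int) -> torch.Tensor:
    world_size = dist.get_world_size(group)
    # Gather straight into one preallocated buffer: no per-rank shard
    # list to cache in the prepare phase, and no extra torch.cat copy.
    output = torch.empty(
        (world_size * hidden_states.shape[0], *hidden_states.shape[1:]),
        dtype=hidden_states.dtype,
        device=hidden_states.device,
    )
    dist.all_gather_into_tensor(output, hidden_states, group=group)
    # Unpad if necessary, mirroring the patched finalize.
    if num_tokens < output.shape[0]:
        output = output[:num_tokens]
    return output

Compared with the merged quick fix, this removes the cached shard list entirely instead of deleting it after the fact, so peak memory never holds two copies of the gathered hidden states.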