[Perf] Reduce memory usage by splitting tokens in fused_experts and avoiding unused tensor #833
What this PR does / why we need it?
Splitting tokens in fused_experts
With `--max-model-len=32768` on DeepSeek-R1-W8A8, the `fused_experts` function consumes about 5.75 GB of memory. By splitting the tokens into multiple executions, the memory consumption of `fused_experts` can be reduced to about 1.2 GB, thereby increasing the memory available for the KV cache.
The drawback of this approach is that when the number of prompt tokens in a request exceeds `VLLM_FUSED_EXPERTS_SEQ_SPLIT_LENGTH`, an additional `concat` operator is introduced. However, since most user requests contain fewer than 8192 tokens, we consider this overhead acceptable. A rough sketch of the chunked dispatch is shown below.
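The following is a minimal sketch of the chunking idea only, not the actual vllm-ascend implementation: `_fused_experts_impl`, the argument list, and the default split length of 8192 are assumptions made for illustration.

```python
import os

import torch

# Split length read from the env var named above; the 8192 default is assumed.
SPLIT_LEN = int(os.environ.get("VLLM_FUSED_EXPERTS_SEQ_SPLIT_LENGTH", "8192"))


def _fused_experts_impl(hidden_states, w1, w2, topk_weights, topk_ids):
    """Placeholder standing in for the existing single-shot fused_experts kernel."""
    raise NotImplementedError


def fused_experts_chunked(hidden_states: torch.Tensor,
                          w1: torch.Tensor,
                          w2: torch.Tensor,
                          topk_weights: torch.Tensor,
                          topk_ids: torch.Tensor) -> torch.Tensor:
    """Run the expert MLPs in token chunks to bound peak activation memory."""
    num_tokens = hidden_states.shape[0]
    if num_tokens <= SPLIT_LEN:
        # Short prompts keep the original single-shot path: no extra concat.
        return _fused_experts_impl(hidden_states, w1, w2, topk_weights, topk_ids)

    # Long prefills are processed SPLIT_LEN tokens at a time, so the
    # intermediate activations never exceed one chunk's worth of tokens.
    outputs = [
        _fused_experts_impl(hs, w1, w2, tw, ti)
        for hs, tw, ti in zip(torch.split(hidden_states, SPLIT_LEN, dim=0),
                              torch.split(topk_weights, SPLIT_LEN, dim=0),
                              torch.split(topk_ids, SPLIT_LEN, dim=0))
    ]
    # The extra concat mentioned above only occurs on this long-prompt path.
    return torch.cat(outputs, dim=0)
```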
Avoiding unused tensor
`self.inputs_embeds` in NPUModelRunner V1 is always allocated, but it is only used for multi-modal models, so I changed the condition under which it is created to reduce memory usage. A rough sketch is shown below.
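A minimal sketch of the conditional allocation, assuming field names such as `is_multimodal_model`, `max_num_tokens`, and `hidden_size`; the exact attributes in NPUModelRunner may differ.

```python
import torch


class NPUModelRunnerSketch:
    """Illustrative fragment only; the real NPUModelRunner has many more fields."""

    def __init__(self, is_multimodal_model: bool, max_num_tokens: int,
                 hidden_size: int, dtype: torch.dtype, device: str):
        if is_multimodal_model:
            # Only multi-modal models ever consume inputs_embeds, so only they
            # pay for the persistent [max_num_tokens, hidden_size] buffer.
            self.inputs_embeds = torch.zeros(max_num_tokens,
                                             hidden_size,
                                             dtype=dtype,
                                             device=device)
        else:
            # Text-only models never read this tensor; skip the allocation.
            self.inputs_embeds = None
```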
Does this PR introduce any user-facing change?
No
How was this patch tested?