### What this PR does / why we need it?
In the PD disaggregation scenario, the first token inferred after the D node receives the KV cache follows eager mode.
Fixes: when running MTP in torchair graph mode with prefill-decode disaggregation, if all requests processed by the D node are requests just transmitted from the P node, the torchair graph breaks.

Reason: during PD disaggregation, the P node transmits only the KV cache and the prompt to the D node, not the tokens it actually inferred (neither the main-model token nor the MTP tokens). The D node therefore treats such a request as one without MTP tokens (seq_len=1).
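For illustration, a minimal sketch of the transfer payload described above. The field names are assumptions for readability, not the actual vLLM KV-transfer schema:

```python
from dataclasses import dataclass

# Illustrative only: shows why the D node sees seq_len=1. The payload
# carries the prefilled KV cache and the prompt, but neither the
# main-model output token nor the MTP draft tokens.
@dataclass
class KVTransferPayload:
    request_id: str
    prompt_token_ids: list[int]  # original prompt only
    kv_block_ids: list[int]      # blocks holding the prefilled KV cache
    # No generated token and no MTP draft tokens, so the D node
    # schedules the request as a 1-token (seq_len=1) decode step.
```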
The community implementation does not hit this graph-mode issue because its attention uses seq_len=1 for every request in the decode phase. We hit it because our graph mode pads on the assumption of 2 tokens per request: when some requests have seq_len=1 and some have seq_len=2, padding is appended at the end of the batch, but if every request received by the D node has seq_len=1, padding cannot be performed normally under the FIA operator's constraints.
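A minimal sketch of the padding behaviour described above, assuming a graph captured for 2 tokens per request (the main-model token plus one MTP draft token). This is illustrative, not the actual vllm-ascend code:

```python
import torch

GRAPH_TOKENS_PER_REQ = 2  # assumption: graph captured for MTP k=1

def tail_pad_to_graph_size(input_ids: torch.Tensor,
                           seq_lens: list[int],
                           pad_id: int = 0) -> torch.Tensor:
    """Pad the flattened decode batch up to the captured graph size."""
    graph_size = len(seq_lens) * GRAPH_TOKENS_PER_REQ
    num_pad = graph_size - input_ids.numel()
    return torch.nn.functional.pad(input_ids, (0, num_pad), value=pad_id)

# Mixed batch (some seq_len=2, some seq_len=1): tail padding works,
# since some requests still keep the 2-tokens-per-request layout the
# graph was captured with.
mixed = tail_pad_to_graph_size(torch.tensor([11, 12, 21]), [2, 1])

# All-seq_len=1 batch (every request fresh from the P node): the same
# tail padding leaves no request occupying 2 slots, which violates the
# FIA operator's captured-shape constraints.
fresh = tail_pad_to_graph_size(torch.tensor([11, 21]), [1, 1])
```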
Solution:
1. The KV consumer applies extra torchair graph padding to avoid breaking the FIA graph constraints (the approach this PR implements; see the sketch below).
2. The KV producer transmits the correct tokens to the KV consumer, so our graph-mode constraints are never broken and all logic matches the PD mixed deployment. Since we use the community scheduler, this would require patching the vLLM scheduler, but it should theoretically perform better. (Maybe later.)
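A hedged sketch of approach 1: when every request in the decode batch has just arrived from the P node (seq_len=1, no MTP draft token), the KV consumer pads each request up to the 2-token layout instead of tail-padding the batch. All names here are illustrative assumptions, not the actual code path:

```python
import torch

GRAPH_TOKENS_PER_REQ = 2
PAD_TOKEN_ID = 0  # hypothetical placeholder id; its output is discarded

def pad_fresh_consumer_batch(input_ids: torch.Tensor,
                             seq_lens: torch.Tensor):
    """Give every seq_len=1 request a padded second slot."""
    if bool((seq_lens == 1).all()):
        num_reqs = seq_lens.numel()
        padded = torch.full((num_reqs, GRAPH_TOKENS_PER_REQ), PAD_TOKEN_ID,
                            dtype=input_ids.dtype)
        padded[:, 0] = input_ids  # real token first, pad slot second
        # Every request now occupies 2 slots, so the FIA operator sees
        # the shape it was captured with; logits for the pad slots are
        # simply dropped after the graph runs.
        return padded.reshape(-1), torch.full_like(seq_lens,
                                                   GRAPH_TOKENS_PER_REQ)
    return input_ids, seq_lens
```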
Signed-off-by: xuyexiong <xuyexiong@huawei.com>