
[Perf] Add new npu_fused_infer_attention_score op to improve performance in splitfuse cases and resolve long-seq mask problems #639

Triggered via pull request September 20, 2025 08:30
Status: Skipped
Total duration: 1s
Artifacts: none

vllm_ascend_test_310p.yaml

on: pull_request
Matrix: 310p e2e test
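
The run page above only shows the workflow filename, its `pull_request` trigger, and a matrix job named "310p e2e test". As a rough, hypothetical sketch of how such a workflow could be declared (the job id, matrix axis, and runner label below are placeholders, not taken from the actual vllm_ascend_test_310p.yaml):

```yaml
# Hypothetical skeleton; only the filename, trigger, and matrix job name
# come from the run page. All other values are illustrative placeholders.
name: vllm_ascend_test_310p

on: pull_request   # matches the "on: pull_request" trigger shown above

jobs:
  e2e-310p:
    name: 310p e2e test
    strategy:
      matrix:
        runner: [placeholder-310p-runner]   # real axes are not visible here
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      # real e2e test steps would follow
```

A "Skipped" status with a 1s duration typically means the workflow's trigger conditions (for example, path or branch filters) excluded this pull request, so no matrix jobs actually ran.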