Releases: vllm-project/vllm-ascend
v0.11.0rc2
This is the second release candidate of v0.11.0 for vLLM Ascend. In this release, we fixed many bugs to improve quality. Thanks for all your feedback. We'll keep working on bug fixes and performance improvements. The v0.11.0 official release will come soon. Please follow the official doc to get started.
Highlights
- CANN is upgraded to 8.3.RC2. #4332
- Ngram spec decode method is back now. #4092
- The performance of aclgraph is improved by updating default capture size. #4205
Core
- Speed up vLLM startup time. #4099
- Kimi k2 with quantization works now. #4190
- Fix a bug for qwen3-next. It's more stable now. #4025
Other
- Fix an issue in full-decode-only mode. Full graph mode is more stable now. #4106 #4282
- Fix an allgather ops bug for DeepSeek V3 series models. #3711
- Fix some bugs in the EPLB feature. #4150 #4334
- Fix a bug that VL models don't work on x86 machines. #4285
- Support IPv6 for the prefill disaggregation proxy. Please note that the mooncake connector doesn't work with IPv6 yet. We're working on it. #4242
- Add a check to ensure EPLB only supports the w8a8 quantization method. #4315
- Add a check to ensure the FLASHCOMM feature is not used with VL models. It'll be supported in 2025 Q4. #4222
- The library required for audio models is now installed in the container image. #4324
Known Issues
- Ray + EP doesn't work. If you run vLLM Ascend with Ray, please disable expert parallelism. #4123
- The `response_format` parameter is not supported yet. We'll support it soon. #4175
- The CPU bind feature doesn't work in the multi-instance case (such as multiple DP ranks on one node). We'll fix it in the next release.
Full Changelog: v0.11.0rc1...v0.11.0rc2
v0.11.0rc1
This is the first release candidate of v0.11.0 for vLLM Ascend. Please follow the official doc to get started.
v0.11.0 will be the next official release version of vLLM Ascend. We'll release it in the next few days. Any feedback is welcome to help us to improve v0.11.0.
Highlights
- CANN is upgraded to 8.3.RC1 and torch-npu is upgraded to 2.7.1. #3945 #3896
- PrefixCache and Chunked Prefill are enabled by default. #3967
- W4A4 quantization is supported now. #3427 An official tutorial is available here.
- The official documentation has now been switched to https://docs.vllm.ai/projects/ascend.
Core
- The performance of Qwen3 and DeepSeek V3 series models is improved.
- Mooncake layerwise connector is supported now #2602. Find tutorial here.
- MTP > 1 is supported now. #2708
- [Experimental] Graph mode `FULL_DECODE_ONLY` is supported now! And `FULL` will be landing in the next few weeks. #2128
- Pooling models, such as bge-m3, are supported now. #3171
Other
- Refactor the MoE module to make it clearer and easier to understand; performance has improved in both quantized and non-quantized scenarios.
- Refactor model register module to make it easier to maintain. We'll remove this module in Q4 2025. #3004
- LLMDatadist KV Connector is deprecated. We'll remove it in Q1 2026.
- Refactor the linear module to support the flashcomm1 and flashcomm2 features from the FlashComm paper. #3004 #3334
Known Issues
- In the PD disaggregation + full graph case, memory may leak and the service may get stuck after serving for a long time. This is a bug in torch-npu; we'll upgrade it and fix the issue soon.
- The accuracy of Qwen2.5 VL is not very good with BF16 on the videobench dataset. This is a bug caused by CANN; we'll fix it soon.
- For long-sequence inputs (>32k), there is sometimes no response and KV cache usage grows. This is a bug in the vLLM scheduler and we are working on it. A temporary solution is to set `max-model-len` to a suitable value.
- Qwen2-audio doesn't work by default; we're fixing it. A temporary solution is to set `--gpu-memory-utilization` to a suitable value, such as 0.8.
- When running Qwen3-Next with expert parallel enabled, please set the `HCCL_BUFFSIZE` environment variable to a suitable value, such as 1024.
- The accuracy of DeepSeek 3.2 with aclgraph is not correct. A temporary solution is to set `cudagraph_capture_sizes` to a suitable value depending on the input batch size.
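The workarounds above boil down to a few launch-time settings. A minimal sketch, assuming a shell-based launch; the values and the commented serve flags are illustrative examples, not recommendations:

```shell
# Illustrative workaround sketch for the known issues above.
# All values are examples to tune per deployment.

# Qwen3-Next + expert parallel: raise the HCCL buffer size.
export HCCL_BUFFSIZE=1024

# Long-sequence inputs (>32k) / Qwen2-audio: cap memory-related knobs,
# e.g. (placeholder model name):
#   vllm serve <model> --max-model-len 32768 --gpu-memory-utilization 0.8
```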
New Contributors
- @huangdong2022 made their first contribution in #3205
- @kiscad made their first contribution in #3226
- @dsxsteven made their first contribution in #3381
- @elilzhu made their first contribution in #3426
- @yuzhup made their first contribution in #3203
- @DreamerLeader made their first contribution in #3476
- @yechao237 made their first contribution in #3473
- @leijie-cn made their first contribution in #3519
- @Anionex made their first contribution in #3311
- @Semmer2 made their first contribution in #4041
Full Changelog: v0.11.0rc0...v0.11.0rc1
v0.11.0rc0
This is the special release candidate of v0.11.0 for vLLM Ascend. Please follow the official doc to get started.
Highlights
- DeepSeek V3.2 is supported now. #3270 Please follow the official guide to take a try.
- Qwen3-vl is supported now. #3103
Core
- DeepSeek works with aclgraph now. #2707
- MTP works with aclgraph now. #2932
- EPLB is supported now. #2956
- Mooncake store kvcache connector is supported now. #2913
- CPU offload connector is supported now. #1659
Other
- Qwen3-next is stable now. #3007
- Fixed a lot of bugs introduced in v0.10.2 by Qwen3-next. #2964 #2781 #3070 #3113
- The LoRA feature is back now. #3044
- Eagle3 spec decode method is back now. #2949
New Contributors
- @offline893 made their first contribution in #2956
- @1Fire4 made their first contribution in #2869
- @jesse996 made their first contribution in #2796
- @Lucaskabela made their first contribution in #2969
- @qyqc731 made their first contribution in #2962
- @Mercykid-bash made their first contribution in #3042
- @MaoJianwei made their first contribution in #3116
- @booker123456 made their first contribution in #3071
- @Csrayz made their first contribution in #2372
- @Clorist33 made their first contribution in #3035
- @clrs97 made their first contribution in #2931
- @zzhx1 made their first contribution in #3027
- @mfyCn-1204 made their first contribution in #3123
- @dragondream-chen made their first contribution in #3132
- @florenceCH made their first contribution in #3126
- @slippersss made their first contribution in #3153
- @socrahow made their first contribution in #3151
Full Changelog: v0.10.2rc1...v0.11.0rc0
v0.10.2rc1
This is the 1st release candidate of v0.10.2 for vLLM Ascend. Please follow the official doc to get started.
Highlights
- Add support for Qwen3 Next. Please note that the expert parallel and MTP features don't work with this release. We'll make them work soon. Follow the official guide to get started. #2917
- Add quantization support for aclgraph #2841
Core
- Aclgraph now works with Ray backend. #2589
- MTP now works with more than one speculative token. #2708
- Qwen2.5 VL now works with quantization. #2778
- Improved the performance with async scheduler enabled. #2783
- Fixed the performance regression with non-MLA models when using the default scheduler. #2894
Other
- The performance of w8a8 quantization is improved. #2275
- The performance of MoE models is improved. #2689 #2842
- Fixed a resource limit error when applying speculative decoding and aclgraph. #2472
- Fixed the git config error in docker images. #2746
- Fixed the sliding windows attention bug with prefill. #2758
- The official doc for Prefill Decode Disaggregation with Qwen3 is added. #2751
- `VLLM_ENABLE_FUSED_EXPERTS_ALLGATHER_EP` env works again. #2740
- A new improvement for oproj in DeepSeek is added. Set `oproj_tensor_parallel_size` to enable this feature. #2167
- Fix a bug that DeepSeek with torchair doesn't work as expected when `graph_batch_sizes` is set. #2760
- Avoid duplicate generation of sin_cos_cache in rope when kv_seqlen > 4k. #2744
- The performance of the Qwen3 dense model is improved with flashcomm_v1. Set `VLLM_ASCEND_ENABLE_DENSE_OPTIMIZE=1` and `VLLM_ASCEND_ENABLE_FLASHCOMM=1` to enable it. #2779
- The performance of the Qwen3 dense model is improved with the prefetch feature. Set `VLLM_ASCEND_ENABLE_PREFETCH_MLP=1` to enable it. #2816
- The performance of the Qwen3 MoE model is improved with a rope ops update. #2571
- Fix the weight load error for RLHF case. #2756
- Add warm_up_atb step to speed up the inference. #2823
- Fixed the aclgraph stream error for MoE models. #2827
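The Qwen3 dense-model optimizations above are opt-in via environment variables. A minimal sketch, assuming they are exported before launching the server:

```shell
# Opt in to the Qwen3 dense-model optimizations described above
# (set to 1 to enable, per the release notes).
export VLLM_ASCEND_ENABLE_DENSE_OPTIMIZE=1
export VLLM_ASCEND_ENABLE_FLASHCOMM=1

# Prefetch optimization is a separate switch:
export VLLM_ASCEND_ENABLE_PREFETCH_MLP=1
```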
Known issue
- The server will hang when running Prefill Decode Disaggregation with different TP sizes for P and D. It's fixed by a vLLM commit which is not included in v0.10.2. You can cherry-pick this commit to fix the issue.
- The HBM usage of Qwen3 Next is higher than expected. It's a known issue and we're working on it. You can set `max_model_len` and `gpu_memory_utilization` to suitable values based on your parallel config to avoid OOM errors.
- We notice that LoRA doesn't work with this release due to the KV cache refactor. We'll fix it soon. #2941
- Please do not enable chunked prefill with prefix cache when running with the Ascend scheduler; the performance and accuracy are not good. #2943
New Contributors
- @WithHades made their first contribution in #2589
- @vllm-ascend-ci made their first contribution in #2755
- @1092626063 made their first contribution in #2708
- @marcobarlo made their first contribution in #2039
- @realliujiaxu made their first contribution in #2719
- @machenglong2025 made their first contribution in #2805
- @fffrog made their first contribution in #2815
- @anon189Ty made their first contribution in #2619
- @zhaozx-cn made their first contribution in #2787
- @wenba0 made their first contribution in #2778
- @wuweiqiang24 made their first contribution in #2814
- @wyu0-0 made their first contribution in #2857
- @nwpu-zxr made their first contribution in #2824
Full Changelog: v0.10.1rc1...v0.10.2rc1
v0.10.1rc1
This is the 1st release candidate of v0.10.1 for vLLM Ascend. Please follow the official doc to get started.
Highlights
- LoRA performance is much improved through custom kernels added by China Merchants Bank. #2325
- Support Mooncake TransferEngine for kv cache register and pull_blocks style disaggregate prefill implementation. #1568
- Support capture custom ops into aclgraph now. #2113
Core
- Add MLP tensor parallel to improve performance, but note that this will increase memory usage. #2120
- openEuler is upgraded to 24.03. #2631
- Add custom lmhead tensor parallel to achieve reduced memory consumption and improved TPOT performance. #2309
- Qwen3 MoE/Qwen2.5 support torchair graph now. #2403
- Support Sliding Window Attention with AscendScheduler, thus fixing the Gemma3 accuracy issue. #2528
Other
- Bug fixes:
- Update the graph capture size calculation, which somewhat alleviates the problem of insufficient NPU streams in some scenarios #2511
- Fix bugs and refactor cached mask generation logic. #2442
- Fix the nz format not working in quantization scenarios. #2549
- Fix accuracy issue on Qwen series caused by enabling `enable_shared_expert_dp` by default. #2457
- Fix accuracy issue on models whose rope dim is not equal to head dim, e.g., GLM4.5. #2601
- Performance improved through a lot of prs:
- A batch of refactoring prs to enhance the code architecture:
- Parameters changes:
- Add `lmhead_tensor_parallel_size` in `additional_config`; set it to enable lmhead tensor parallel. #2309
- Some unused environ variables `HCCN_PATH`, `PROMPT_DEVICE_ID`, `DECODE_DEVICE_ID`, `LLMDATADIST_COMM_PORT` and `LLMDATADIST_SYNC_CACHE_WAIT_TIME` are removed. #2448
- Environ variable `VLLM_LLMDD_RPC_PORT` is renamed to `VLLM_ASCEND_LLMDD_RPC_PORT` now. #2450
- Add `VLLM_ASCEND_ENABLE_MLP_OPTIMIZE` in environ variables; it enables the MLP optimization when tensor parallel is enabled, and gives better performance in eager mode. #2120
- Remove `MOE_ALL2ALL_BUFFER` and `VLLM_ASCEND_ENABLE_MOE_ALL2ALL_SEQ` from environ variables. #2612
- Add `enable_prefetch` in `additional_config`; it controls whether to enable weight prefetch. #2465
- Add `mode` in `additional_config.torchair_graph_config`; when using reduce-overhead mode for torchair, `mode` needs to be set. #2461
- `enable_shared_expert_dp` in `additional_config` is disabled by default now; it is recommended to enable it when inferencing with DeepSeek. #2457
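The `additional_config` parameters introduced in this release can be pictured as one nested structure. A sketch of an assumed payload shape; the concrete values are illustrative, not recommendations:

```python
# Hypothetical additional_config payload using the parameters named in
# these release notes; values are examples only.
additional_config = {
    "lmhead_tensor_parallel_size": 2,        # enable lmhead tensor parallel (#2309)
    "enable_prefetch": True,                 # enable weight prefetch (#2465)
    "torchair_graph_config": {
        "mode": "reduce-overhead",           # required for reduce-overhead torchair (#2461)
    },
    "enable_shared_expert_dp": True,         # off by default; suggested for DeepSeek (#2457)
}
```

At launch this would be passed as JSON, e.g. (placeholder invocation) `vllm serve <model> --additional-config '{"lmhead_tensor_parallel_size": 2}'`.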
Known Issues
- Sliding window attention does not support chunked prefill currently, so it can only run with AscendScheduler enabled. #2729
- There is a bug with creating mc2_mask when MultiStream is enabled; we will fix it in the next release. #2681
New Contributors
- @lidenghui1110 made their first contribution in #1917
- @haojiangzheng made their first contribution in #1772
- @QwertyJack made their first contribution in #2298
- @LCAIZJ made their first contribution in #1568
- @liuchenbing made their first contribution in #2325
- @gameofdimension made their first contribution in #2407
- @NicholasTao made their first contribution in #2403
- @ZhaoJiangJiang made their first contribution in #2453
- @s-jiayang made their first contribution in #2373
- @NSDie made their first contribution in #2528
- @panchao-hub made their first contribution in #2639
- @zzy-ContiLearn made their first contribution in #2541
- @baxingpiaochong made their first contribution in #2664
Full Changelog: v0.10.0rc1...v0.10.1rc1
v0.9.1
We are excited to announce the newest official release of vLLM Ascend. This release includes many feature supports, performance improvements and bug fixes. We recommend users upgrade from 0.7.3 to this version. Please always set VLLM_USE_V1=1 to use the V1 engine.
In this release, we added many enhancements for the large-scale expert parallel case. It's recommended to follow the official guide.
Please note that this release note lists all the important changes since the last official release (v0.7.3).
Highlights
- DeepSeek V3/R1 is supported with high quality and performance. MTP can work with DeepSeek as well. Please refer to the multi-node tutorials and Large Scale Expert Parallelism.
- Qwen series models work with graph mode now. It works by default with V1 Engine. Please refer to Qwen tutorials.
- Disaggregated Prefilling support for V1 Engine. Please refer to Large Scale Expert Parallelism tutorials.
- Automatic prefix caching and chunked prefill features are supported.
- Speculative decoding works with the Ngram and MTP methods.
- MoE and dense w4a8 quantization are supported now. Please refer to the quantization guide.
- Sleep Mode feature is supported for V1 engine. Please refer to Sleep mode tutorials.
- Dynamic and Static EPLB support is added. This feature is still experimental.
Note
The following notes are especially for reference when upgrading from last final release (v0.7.3):
- V0 Engine is not supported from this release. Please always set `VLLM_USE_V1=1` to use the V1 engine with vLLM Ascend.
- MindIE Turbo is not needed with this release, and the old version of MindIE Turbo is not compatible. Please do not install it. All its functionality and enhancements are already included in vLLM Ascend. We'll consider adding it back in the future if needed.
- Torch-npu is upgraded to 2.5.1.post1. CANN is upgraded to 8.2.RC1. Don't forget to upgrade them.
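For upgraders from v0.7.3, the engine switch above is a one-line setting. A minimal sketch, assuming a shell launch; the serve command is a placeholder:

```shell
# Force the V1 engine, as required from this release onward.
export VLLM_USE_V1=1

# Then launch as usual, e.g.:
#   vllm serve <model>
```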
Core
- The Ascend scheduler is added for the V1 engine. This scheduler is better tailored to Ascend hardware.
- Structured output feature works now on V1 Engine.
- A batch of custom ops are added to improve the performance.
Changes
- EPLB support for Qwen3-moe model. #2000
- Fix the bug that MTP doesn't work well with Prefill Decode Disaggregation. #2610 #2554 #2531
- Fix a few bugs to make sure Prefill Decode Disaggregation works well. #2538 #2509 #2502
- Fix file not found error with shutil.rmtree in torchair mode. #2506
Known Issues
- When running MoE models, Aclgraph mode only works with tensor parallel. DP/EP doesn't work in this release.
- Pipeline parallelism is not supported in this release for V1 engine.
- If you use w4a8 quantization with eager mode, please set `VLLM_ASCEND_MLA_PARALLEL=1` to avoid OOM errors.
- Accuracy tests with some tools may not be correct. This doesn't affect real use cases. We'll fix it in the next post release. #2654
- We notice that there are still some problems when running vLLM Ascend with Prefill Decode Disaggregation. For example, memory may leak and the service may get stuck. This is caused by known issues in vLLM and vLLM Ascend. We'll fix them in the next post release. #2650 #2604 vLLM#22736 vLLM#23554 vLLM#23981
v0.9.1rc3
This is the 3rd release candidate of v0.9.1 for vLLM Ascend. Please follow the official doc to get started.
Core
- MTP supports V1 scheduler #2371
- Add LMhead TP communication groups #1956
- Fix the bug that qwen3 moe doesn't work with aclgraph #2478
- Fix `grammar_bitmask` IndexError caused by outdated `apply_grammar_bitmask` method #2314
- Remove `chunked_prefill_for_mla` #2177
chunked_prefill_for_mla#2177 - Fix bugs and refactor cached mask generation logic #2326
- Fix configuration check logic about ascend scheduler #2327
- Cancel the verification between deepseek-mtp and non-ascend scheduler in disaggregated-prefill deployment #2368
- Fix issue that failed with ray distributed backend #2306
- Fix incorrect req block length in ascend scheduler #2394
- Fix header include issue in rope #2398
- Fix mtp config bug #2412
- Fix error info and adapt the `attn_metadata` refactor #2402
- Fix torchair runtime error caused by configuration mismatches and missing `.kv_cache_bytes` file #2312
- Move `with_prefill` allreduce from CPU to NPU #2230
Docs
- Add document for deepseek large EP #2339
Known Issues
- Full graph mode support is not yet available for some cases with `full_cuda_graph` enabled. #2182
Full Changelog: v0.9.1rc2...v0.9.1rc3
v0.10.0rc1
This is the 1st release candidate of v0.10.0 for vLLM Ascend. Please follow the official doc to get started. V0 is completely removed from this version.
Highlights
- Disaggregate prefill works with V1 engine now. You can take a try with DeepSeek model #950, following this tutorial.
- W4A8 quantization method is supported for dense and MoE model now. #2060 #2172
Core
- Ascend PyTorch adapter (torch_npu) has been upgraded to `2.7.1.dev20250724` #1562, and CANN has been upgraded to `8.2.RC1`. #1653 Don't forget to update them in your environment or use the latest images.
- vLLM Ascend works on Atlas 800I A3 now, and the A3 image will be released starting from this version. #1582
- Kimi-K2 with w8a8 quantization, Qwen3-Coder and GLM-4.5 are supported in vLLM Ascend; please follow this tutorial to have a try. #2162
- Pipeline Parallelism is supported in V1 now. #1800
- The prefix cache feature now works with the Ascend Scheduler. #1446
- Torchair graph mode works with tp > 4 now. #1508
- MTP supports torchair graph mode now. #2145
Other
- Bug fixes:
- Performance improved through a lot of PRs:
- Caching sin/cos instead of calculating it every layer. #1890
- Improve shared expert multi-stream parallelism #1891
- Implement the fusion of allreduce and matmul in the prefill phase when TP is enabled. Enable this feature by setting `VLLM_ASCEND_ENABLE_MATMUL_ALLREDUCE` to `1`. #1926
- Optimize quantized MoE performance by reducing All2All communication. #2195
- Use AddRmsNormQuant ops in the custom model to optimize Qwen3's performance #1806
- Use multicast to avoid padding decode request to prefill size #1555
- The performance of LoRA has been improved. #1884
- A batch of refactoring PRs to enhance the code architecture:
- Parameters changes:
- `expert_tensor_parallel_size` in `additional_config` is removed now, and EP and TP are aligned with vLLM now. #1681
- Add `VLLM_ASCEND_MLA_PA` in environ variables; use this to enable the MLA paged attention operator for DeepSeek MLA decode.
- Add `VLLM_ASCEND_ENABLE_MATMUL_ALLREDUCE` in environ variables; it enables the `MatmulAllReduce` fusion kernel when tensor parallel is enabled. This feature is supported on A2, and eager mode will get better performance.
- Add `VLLM_ASCEND_ENABLE_MOE_ALL2ALL_SEQ` in environ variables; it controls whether to enable MoE all2all seq, providing a basic framework on the basis of alltoall for easy expansion.
- UT coverage reached 76.34% after a batch of PRs following this RFC: #1298
- Sequence Parallelism works for Qwen3 MoE. #2209
- Chinese online documentation is added now. #1870
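The optional switches introduced in this release are enabled through environment variables. A minimal sketch, assuming "1" as the enabling value (stated for the matmul-allreduce switch; assumed for the MLA one):

```shell
# Fuse matmul + allreduce in the prefill phase when TP is enabled (#1926).
export VLLM_ASCEND_ENABLE_MATMUL_ALLREDUCE=1

# MLA paged attention operator for DeepSeek MLA decode
# (enabling value of 1 is an assumption).
export VLLM_ASCEND_MLA_PA=1
```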
Known Issues
- Aclgraph does not work with DP + EP currently; the main gap is that the number of NPU streams Aclgraph needs to capture the graph is not enough. #2229
- There is an accuracy issue on W8A8 dynamic quantized DeepSeek with multistream enabled. This will be fixed in the next release. #2232
- In Qwen3 MoE, SP cannot be incorporated into the Aclgraph. #2246
- MTP does not support the V1 scheduler currently; we will fix it in Q3. #2254
- When running MTP with DP > 1, the metrics logger needs to be disabled due to an issue in vLLM. #2254
- The GLM 4.5 model has an accuracy problem in long-output-length scenarios.
New Contributors
- @pkking made their first contribution in #1792
- @lianyiibo made their first contribution in #1811
- @nuclearwu made their first contribution in #1867
- @aidoczh made their first contribution in #1870
- @shiyuan680 made their first contribution in #1930
- @ZrBac made their first contribution in #1964
- @Ronald1995 made their first contribution in #1988
- @taoxudonghaha made their first contribution in #1884
- @hongfugui made their first contribution in #1583
- @YuanCheng-coder made their first contribution in #2067
- @Liccol made their first contribution in #2127
- @1024daniel made their first contribution in #2037
- @yangqinghao-cmss made their first contribution in #2121
Full Changelog: v0.9.2rc1...v0.10.0rc1
v0.9.1rc2
This is the 2nd release candidate of v0.9.1 for vLLM Ascend. Please follow the official doc to get started.
Highlights
- MoE and dense w4a8 quantization are supported now: #1320 #1910 #1275 #1480
- Dynamic EPLB support in #1943
- Disaggregated Prefill support for the V1 Engine and improvements: continued development and stabilization of the disaggregated prefill feature, including performance enhancements and bug fixes for single-machine setups: #1953 #1612 #1361 #1746 #1552 #1801 #2083 #1989
Models improvement:
- DeepSeek DeepSeek DBO support and improvement: #1285 #1291 #1328 #1420 #1445 #1589 #1759 #1827 #2093
- DeepSeek MTP improvement and bugfix: #1214 #943 #1584 #1473 #1294 #1632 #1694 #1840 #2076 #1990 #2019
- Qwen3 MoE support improvement and bugfix around graph mode and DP: #1940 #2006 #1832
- Qwen3 performance improvement around rmsnorm/repo/mlp ops: #1545 #1719 #1726 #1782 #1745
- DeepSeek MLA chunked prefill/graph mode/multistream improvement and bugfix: #1240 #933 #1135 #1311 #1750 #1872 #2170 #1551
- Qwen2.5 VL improvement via mrope/padding mechanism improvement: #1261 #1705 #1929 #2007
- Ray: Fix the device error when using ray and add initialize_cache and improve warning info: #1234 #1501
Graph mode improvement:
- Fix DeepSeek with mc2 in #1269
- Fix accuracy problem for deepseek V3/R1 models with torchair graph in long sequence predictions in #1332
- Fix torchair_graph_batch_sizes bug in #1570
- Enable the limit of tp <= 4 for torchair graph mode in #1404
- Fix rope accuracy bug #1887
- Support multistream of shared experts in FusedMoE #997
- Enable kvcache_nz for the decode process in torchair graph mode #1098
- Fix chunked-prefill with torchair case to resolve UnboundLocalError: local variable 'decode_hs_or_q_c' issue in #1378
- Improve shared experts multi-stream perf for w8a8 dynamic. in #1561
- Repair MoE error when multistream is set. in #1882
- Round up graph batch size to tp size in EP case #1610
- Fix torchair bug when DP is enabled in #1727
- Add extra checking to torchair_graph_config. in #1675
- Fix rope bug in torchair+chunk-prefill scenario in #1693
- torchair_graph bugfix when chunked_prefill is true in #1748
- Improve prefill optimization to support torchair graph mode in #2090
- Fix rank set in DP scenario #1247
- Reset all unused positions to prevent out-of-bounds to resolve GatherV3 bug in #1397
- Remove duplicate multimodal codes in ModelRunner in #1393
- Fix block table shape to resolve accuracy issue in #1297
- Implement primal full graph with limited scenario in #1503
- Restore paged attention kernel in Full Graph for performance in #1677
- Fix DeepSeek OOM issue in extreme `--gpu-memory-utilization` scenario in #1829
- Turn off aclgraph when enabling TorchAir in #2154
Ops improvement:
- add custom ascendc kernel vocabparallelembedding #796
- fix rope sin/cos cache bug in #1267
- Refactoring AscendFusedMoE (#1229) in #1264
- Use fused ops npu_top_k_top_p in sampler #1920
Core:
- Upgrade CANN to 8.2.rc1 in #2036
- Upgrade torch-npu to 2.5.1.post1 in #2135
- Upgrade python to 3.11 in #2136
- Disable quantization in mindie_turbo in #1749
- fix v0 spec decode in #1323
- Enable `ACL_OP_INIT_MODE=1` directly only when using V0 spec decode in #1271
- Refactoring forward_context and model_runner_v1 in #1422
- Fix sampling params in #1423
- add a switch for enabling NZ layout in weights and enable NZ for GMM. in #1409
- Resolved bug in ascend_forward_context in #1449 #1554 #1598
- Address PrefillCacheHit state to fix prefix cache accuracy bug in #1492
- Fix load weight error and add new e2e case in #1651
- Optimize the number of rope-related index selections in deepseek. in #1614
- add mc2 mask in #1642
- Fix static EPLB log2phy condition and improve unit test in #1667 #1896 #2003
- add chunk mc2 for prefill in #1703
- Fix mc2 op GroupCoordinator bug in #1711
- Fix the failure to recognize the actual type of quantization i...
v0.9.2rc1
This is the 1st release candidate of v0.9.2 for vLLM Ascend. Please follow the official doc to get started. From this release, the V1 engine is enabled by default; there is no need to set VLLM_USE_V1=1 any more. This release is the last version to support the V0 engine; V0 code will be cleaned up in the future.
Highlights
- Pooling model works with V1 engine now. You can take a try with Qwen3 embedding model #1359.
- The performance on Atlas 300I series has been improved. #1591
- aclgraph mode works with Moe models now. Currently, only Qwen3 Moe is well tested. #1381
Core
- Ascend PyTorch adapter (torch_npu) has been upgraded to `2.5.1.post1.dev20250619`. Don't forget to update it in your environment. #1347
- The GatherV3 error has been fixed with aclgraph mode. #1416
- W8A8 quantization works on Atlas 300I series now. #1560
- Fix the accuracy problem when deploying models with parallel parameters. #1678
- The pre-built wheel package now requires a lower version of glibc. Users can install it with `pip install vllm-ascend` directly. #1582
Other
- The official doc has been updated for a better reading experience. For example, more deployment tutorials were added, and user/developer docs were updated. More guides are coming soon.
- Fix accuracy problem for deepseek V3/R1 models with torchair graph in long sequence predictions. #1331
- A new env variable `VLLM_ENABLE_FUSED_EXPERTS_ALLGATHER_EP` has been added. It enables the fused allgather-experts kernel for DeepSeek V3/R1 models. The default value is `0`. #1335
- A new env variable `VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION` has been added to improve the performance of topk-topp sampling. The default value is 0; we'll consider enabling it by default in the future. #1732
- A batch of bugs have been fixed for the Data Parallelism case. #1273 #1322 #1275 #1478
- The DeepSeek performance has been improved. #1194 #1395 #1380
- Ascend scheduler works with prefix cache now. #1446
- DeepSeek works with prefix cache now. #1498
- Support prompt logprobs to recover ceval accuracy in V1 #1483
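Both new env switches above default to 0. A minimal sketch of opting in to them before launch:

```shell
# Fused allgather-experts kernel for DeepSeek V3/R1 (#1335, default 0).
export VLLM_ENABLE_FUSED_EXPERTS_ALLGATHER_EP=1

# Faster topk-topp sampling (#1732, default 0).
export VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION=1
```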
Known Issues
New Contributors
- @xleoken made their first contribution in #1357
- @lyj-jjj made their first contribution in #1335
- @sharonyunyun made their first contribution in #1194
- @Pr0Wh1teGivee made their first contribution in #1308
- @leo-pony made their first contribution in #1374
- @zeshengzong made their first contribution in #1452
- @GDzhu01 made their first contribution in #1477
- @Agonixiaoxiao made their first contribution in #1531
- @zhanghw0354 made their first contribution in #1476
- @farawayboat made their first contribution in #1591
- @ZhengWG made their first contribution in #1196
- @wm901115nwpu made their first contribution in #1654
Full Changelog: v0.9.1rc1...v0.9.2rc1