# Prefill-Decode Disaggregation Verification (Qwen)

## Getting Started

vLLM-Ascend now supports prefill-decode (PD) disaggregation with EP (Expert Parallel) options. This guide walks through the steps to verify these features with constrained resources.

Taking the Qwen3-30B-A3B model as an example, we use vllm-ascend v0.10.1rc1 (with vLLM v0.10.1.1) on 3 Atlas 800T A2 servers to deploy the "1P2D" architecture. Assume the IP of the prefiller server is 192.0.0.1, and the decoder servers are 192.0.0.2 (decoder 1) and 192.0.0.3 (decoder 2). On each server, 2 NPUs are used to deploy one service instance.

## Verify Multi-Node Communication Environment

### Physical Layer Requirements

- The physical machines must be located on the same LAN, with network connectivity.
- All NPUs must be interconnected. Intra-node connectivity is via HCCS, and inter-node connectivity is via RDMA.

### Verification Process

1. Single-Node Verification:

Execute the following commands on each node in sequence. The results must all be `success` and the status must be `UP`:

```bash
# Check the remote switch ports
for i in {0..7}; do hccn_tool -i $i -lldp -g | grep Ifname; done
# Get the link status of the Ethernet ports (UP or DOWN)
for i in {0..7}; do hccn_tool -i $i -link -g; done
# Check the network health status
for i in {0..7}; do hccn_tool -i $i -net_health -g; done
# View the network detected IP configuration
for i in {0..7}; do hccn_tool -i $i -netdetect -g; done
# View gateway configuration
for i in {0..7}; do hccn_tool -i $i -gateway -g; done
# View NPU network configuration
cat /etc/hccn.conf
```

2. Get NPU IP Addresses

```bash
for i in {0..7}; do hccn_tool -i $i -ip -g; done
```

3. Cross-Node PING Test

```bash
# Execute on one node, pinging the NPU IPs of the remote node (replace 'x.x.x.x' with the actual NPU IP address)
for i in {0..7}; do hccn_tool -i $i -ping -g address x.x.x.x; done
```
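
The `cat /etc/hccn.conf` check in step 1 prints the driver-generated file that maps each NPU device ID to the IP of its RDMA NIC. As an offline illustration of the format (the sample values below are made up, not taken from a real machine), the `address_<id>` entries can be turned into `device_id ip` pairs:

```shell
# Illustrative hccn.conf content; on a real machine this file is generated by the Ascend driver
cat > /tmp/hccn_sample.conf <<'EOF'
address_0=10.20.0.10
netmask_0=255.255.255.0
address_1=10.20.0.11
netmask_1=255.255.255.0
EOF

# Print "device_id ip" pairs from the address_<id> entries
grep '^address_' /tmp/hccn_sample.conf | while IFS='=' read -r key ip; do
    echo "${key#address_} $ip"
done
```

These are the same addresses returned by `hccn_tool -i <id> -ip -g`, and the ones targeted in the cross-node ping test above.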

## Generate Ranktable

The rank table is a JSON file that specifies the mapping of Ascend NPU ranks to nodes. For more details, please refer to the [vllm-ascend examples](https://github.com/vllm-project/vllm-ascend/blob/main/examples/disaggregated_prefill_v1/README.md). Execute the following commands for reference.

```shell
cd vllm-ascend/examples/disaggregated_prefill_v1/
bash gen_ranktable.sh --ips <prefiller_node1_local_ip> <prefiller_node2_local_ip> <decoder_node1_local_ip> <decoder_node2_local_ip> \
    --npus-per-node <npu_chips> --network-card-name <nic_name> --prefill-device-cnt <prefiller_npu_chips> --decode-device-cnt <decode_npu_chips> \
    [--local-device-ids <id_1>,<id_2>,<id_3>...]
```

Assume that we use devices 0 and 1 on the prefiller node and devices 6 and 7 on both decoder nodes. Take the following commands as an example. (`--local-device-ids` is required when you use only some of the NPU devices on the local server.)

```shell
# On the prefiller node
cd vllm-ascend/examples/disaggregated_prefill_v1/
bash gen_ranktable.sh --ips 192.0.0.1 192.0.0.2 192.0.0.3 \
    --npus-per-node 2 --network-card-name eth0 --prefill-device-cnt 2 --decode-device-cnt 4 --local-device-ids 0,1

# On decoder 1
cd vllm-ascend/examples/disaggregated_prefill_v1/
bash gen_ranktable.sh --ips 192.0.0.1 192.0.0.2 192.0.0.3 \
    --npus-per-node 2 --network-card-name eth0 --prefill-device-cnt 2 --decode-device-cnt 4 --local-device-ids 6,7

# On decoder 2
cd vllm-ascend/examples/disaggregated_prefill_v1/
bash gen_ranktable.sh --ips 192.0.0.1 192.0.0.2 192.0.0.3 \
    --npus-per-node 2 --network-card-name eth0 --prefill-device-cnt 2 --decode-device-cnt 4 --local-device-ids 6,7
```

The rank table will be generated at `/vllm-workspace/vllm-ascend/examples/disaggregated_prefill_v1/ranktable.json`.

| Parameter | Meaning |
| --- | --- |
| --ips | Each node's local IP (prefiller nodes must come before decoder nodes) |
| --npus-per-node | Number of NPU chips on each node |
| --network-card-name | The name of the physical machine's NIC |
| --prefill-device-cnt | Number of NPU chips used for prefill |
| --decode-device-cnt | Number of NPU chips used for decode |
| --local-device-ids | Optional. Not needed if using all devices on the local node. |
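
Before pointing the launch scripts below at this file, it is worth a quick sanity check that the output is well-formed JSON. A minimal sketch (the default path is the output location given above; `RANKTABLE` is just a shell variable introduced here):

```shell
# Default path assumes the generation step above; override RANKTABLE as needed
RANKTABLE="${RANKTABLE:-/vllm-workspace/vllm-ascend/examples/disaggregated_prefill_v1/ranktable.json}"

# json.tool exits non-zero on a missing or malformed file
if python3 -m json.tool "$RANKTABLE" > /dev/null 2>&1; then
    echo "ranktable OK: $RANKTABLE"
else
    echo "ranktable invalid or missing: $RANKTABLE"
fi
```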

## Prefiller / Decoder Deployment

Run the following scripts to launch a server on the prefiller and decoder nodes respectively.

:::::{tab-set}

::::{tab-item} Prefiller node

```shell
export HCCL_IF_IP=192.0.0.1 # node ip
export GLOO_SOCKET_IFNAME="eth0" # network card name
export TP_SOCKET_IFNAME="eth0"
export HCCL_SOCKET_IFNAME="eth0"
export DISAGGREGATED_PREFILL_RANK_TABLE_PATH="/path/to/your/generated/ranktable.json"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export VLLM_USE_V1=1

vllm serve /model/Qwen3-30B-A3B \
    --host 0.0.0.0 \
    --port 13700 \
    --tensor-parallel-size 2 \
    --no-enable-prefix-caching \
    --seed 1024 \
    --served-model-name qwen3-moe \
    --max-model-len 6144 \
    --max-num-batched-tokens 6144 \
    --trust-remote-code \
    --gpu-memory-utilization 0.9 \
    --enable-expert-parallel \
    --kv-transfer-config \
    '{"kv_connector": "LLMDataDistCMgrConnector",
      "kv_buffer_device": "npu",
      "kv_role": "kv_producer",
      "kv_parallel_size": 1,
      "kv_port": "20001",
      "engine_id": "0",
      "kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_c_mgr_connector"
    }' \
    --additional-config \
    '{"torchair_graph_config": {"enabled":false, "enable_multistream_shared_expert":false}, "ascend_scheduler_config":{"enabled":true, "enable_chunked_prefill":false}}' \
    --enforce-eager
```

::::

::::{tab-item} Decoder node 1

```shell
export HCCL_IF_IP=192.0.0.2 # node ip
export GLOO_SOCKET_IFNAME="eth0" # network card name
export TP_SOCKET_IFNAME="eth0"
export HCCL_SOCKET_IFNAME="eth0"
export DISAGGREGATED_PREFILL_RANK_TABLE_PATH="/path/to/your/generated/ranktable.json"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export VLLM_USE_V1=1

vllm serve /model/Qwen3-30B-A3B \
    --host 0.0.0.0 \
    --port 13700 \
    --no-enable-prefix-caching \
    --tensor-parallel-size 2 \
    --seed 1024 \
    --served-model-name qwen3-moe \
    --max-model-len 6144 \
    --max-num-batched-tokens 6144 \
    --trust-remote-code \
    --gpu-memory-utilization 0.9 \
    --enable-expert-parallel \
    --kv-transfer-config \
    '{"kv_connector": "LLMDataDistCMgrConnector",
      "kv_buffer_device": "npu",
      "kv_role": "kv_consumer",
      "kv_parallel_size": 1,
      "kv_port": "20001",
      "engine_id": "0",
      "kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_c_mgr_connector"
    }' \
    --additional-config \
    '{"torchair_graph_config": {"enabled":false, "enable_multistream_shared_expert":false}, "ascend_scheduler_config":{"enabled":true, "enable_chunked_prefill":false}}'
```

::::

::::{tab-item} Decoder node 2

```shell
export HCCL_IF_IP=192.0.0.3 # node ip
export GLOO_SOCKET_IFNAME="eth0" # network card name
export TP_SOCKET_IFNAME="eth0"
export HCCL_SOCKET_IFNAME="eth0"
export DISAGGREGATED_PREFILL_RANK_TABLE_PATH="/path/to/your/generated/ranktable.json"
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=10
export VLLM_USE_V1=1

vllm serve /model/Qwen3-30B-A3B \
    --host 0.0.0.0 \
    --port 13700 \
    --no-enable-prefix-caching \
    --tensor-parallel-size 2 \
    --seed 1024 \
    --served-model-name qwen3-moe \
    --max-model-len 6144 \
    --max-num-batched-tokens 6144 \
    --trust-remote-code \
    --gpu-memory-utilization 0.9 \
    --enable-expert-parallel \
    --kv-transfer-config \
    '{"kv_connector": "LLMDataDistCMgrConnector",
      "kv_buffer_device": "npu",
      "kv_role": "kv_consumer",
      "kv_parallel_size": 1,
      "kv_port": "20001",
      "engine_id": "0",
      "kv_connector_module_path": "vllm_ascend.distributed.llmdatadist_c_mgr_connector"
    }' \
    --additional-config \
    '{"torchair_graph_config": {"enabled":false, "enable_multistream_shared_expert":false}, "ascend_scheduler_config":{"enabled":true, "enable_chunked_prefill":false}}'
```

::::

:::::

## Example Proxy for Deployment

Run a proxy server on the same node as the prefiller service instance. You can get the proxy program from the repository's examples: [load_balance_proxy_server_example.py](https://github.com/vllm-project/vllm-ascend/blob/main/examples/disaggregated_prefill_v1/load_balance_proxy_server_example.py)

```shell
python load_balance_proxy_server_example.py \
    --host 192.0.0.1 \
    --port 8080 \
    --prefiller-hosts 192.0.0.1 \
    --prefiller-port 13700 \
    --decoder-hosts 192.0.0.2 192.0.0.3 \
    --decoder-ports 13700 13700
```

## Verification

Check service health using the proxy server endpoint.

```shell
curl http://192.0.0.1:8080/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3-moe",
        "prompt": "Who are you?",
        "max_tokens": 100,
        "temperature": 0
    }'
```
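
A successful request returns an OpenAI-style completion JSON from the proxy, with the generated text in `choices[0].text`. It can be extracted with a one-liner; the response below is a hypothetical sample, not output captured from a real deployment:

```shell
# Hypothetical sample response; a real one comes from the curl command above
RESPONSE='{"id":"cmpl-0","object":"text_completion","model":"qwen3-moe","choices":[{"index":0,"text":"I am Qwen."}]}'

# Pull the completion text out of the first choice
echo "$RESPONSE" | python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["text"])'
```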