Describe the bug
Training with ZeRO 2 sharding, bfloat16 parameters, sequence parallelism, and no torch autocast crashes with a KeyError in report_ipg_memory_usage.
To Reproduce
Run training with the settings described above: ZeRO stage 2, bf16 parameters, sequence parallelism, and torch autocast disabled.
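For reference, a minimal sketch of the relevant configuration. This is a hedged illustration, not my full setup: the model, data, and sequence-parallel wiring live in a training script I can't share, and the batch size is a placeholder. The point is stage-2 ZeRO with bf16 and autocast never entered.

```python
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 1,  # placeholder value
    "zero_optimization": {"stage": 2},    # ZeRO 2 sharding
    "bf16": {"enabled": True},            # bfloat16 parameters
    # torch.autocast is never entered anywhere in the training loop.
    # Sequence parallelism is enabled via the surrounding framework
    # (e.g. DeepSpeed-Ulysses); its communication dtype then falls back to
    # SEQ_PARALLEL_COMMUNICATION_DATA_TYPE_DEFAULT == "fp32".
}

# engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```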
Expected behavior
Training runs without a KeyError.
ds_report output
[2025-09-29 14:19:28,705] [INFO] [real_accelerator.py:260:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-09-29 14:19:30,748] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
dc ..................... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] FP Quantizer is using an untested triton version (3.4.0), only 2.3.(0, 1) and 3.0.0 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
[WARNING] gds requires the dev libaio .so object and headers but these were not found.
[WARNING] gds: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
gds .................... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.8
[WARNING] using untested triton version (3.4.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/environment/.venv/lib/python3.12/site-packages/torch']
torch version .................... 2.8.0+cu129
deepspeed install path ........... ['/environment/.venv/lib/python3.12/site-packages/deepspeed']
deepspeed info ................... 0.17.5, unknown, unknown
torch cuda version ............... 12.9
torch hip version ................ None
nvcc version ..................... 12.9
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0
shared memory (/dev/shm) size .... 1.57 TB
System info (please complete the following information):
- 8x H200
- Python version 3.12.10
Launcher context
Something else.
Docker context
Custom docker image that I can't share.
Additional context
report_ipg_memory_usage is passed the dtype of the parameters, which is bfloat16, and expects to find that dtype as a key in ipg_buckets. It doesn't, because ipg_buckets was initialized with float32, stemming, in the particular case examined here, from SEQ_PARALLEL_COMMUNICATION_DATA_TYPE_DEFAULT == "fp32".
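A minimal sketch of the mismatch (illustrative names only; this is a simplification, not the actual DeepSpeed code):

```python
import torch

comm_dtype = torch.float32         # from SEQ_PARALLEL_COMMUNICATION_DATA_TYPE_DEFAULT == "fp32"
ipg_buckets = {comm_dtype: []}     # bucket table keyed by the communication dtype

param_dtype = torch.bfloat16       # dtype of the model parameters

# report_ipg_memory_usage indexes the table with the parameter dtype:
bucket = ipg_buckets[param_dtype]  # KeyError: torch.bfloat16 is not a key
```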
Note: There is inoperative code that is meant to alter this default depending on the NCCL version; the result is overridden to fp32 in every case.
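A hypothetical paraphrase of that dead-code pattern (names and version threshold invented for illustration; not the actual DeepSpeed source):

```python
def pick_seq_parallel_comm_dtype(nccl_version: tuple[int, int]) -> str:
    if nccl_version >= (2, 10):     # hypothetical threshold
        dtype = "bf16"              # computed, but the result never survives
    else:
        dtype = "fp32"
    dtype = "fp32"                  # unconditional override: the check above is inoperative
    return dtype

assert pick_seq_parallel_comm_dtype((2, 18)) == "fp32"
```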
Note: It appears that with torch autocast enabled, a different initialization path might avoid this crash.