Commit c0b8623

Update
[ghstack-poisoned]

1 parent: e46ce79

File tree

1 file changed: +2 −2 lines changed

sota-implementations/grpo/grpo_utils.py: 2 additions & 2 deletions
@@ -162,9 +162,9 @@ def get_ref_model(
     max_memory = {}
     for i in range(torch.cuda.device_count()):
         if i in ref_devices:
-            max_memory[f"cuda:{i}"] = "24GiB"  # Allow max memory for devices we want to use
+            max_memory[i] = "24GiB"  # Allow max memory for devices we want to use
         else:
-            max_memory[f"cuda:{i}"] = "0GiB"  # No memory for other devices
+            max_memory[i] = "0GiB"  # No memory for other devices
     max_memory["cpu"] = "24GiB"  # Allow CPU memory as fallback
 
     # Let HF handle distribution with max_memory
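For context, the max_memory dict built here is typically passed to Hugging Face's from_pretrained so that accelerate can distribute the reference model across devices. accelerate's documented max_memory format uses plain integer GPU indices (plus "cpu" and "disk") as keys, which is consistent with this commit dropping the f"cuda:{i}" string keys. Below is a minimal sketch of the consuming call; the checkpoint name and ref_devices value are illustrative placeholders, not taken from this commit:

import torch
from transformers import AutoModelForCausalLM

ref_devices = {0}  # illustrative; in grpo_utils.py this comes from the caller

max_memory = {}
for i in range(torch.cuda.device_count()):
    if i in ref_devices:
        max_memory[i] = "24GiB"  # Allow max memory for devices we want to use
    else:
        max_memory[i] = "0GiB"  # No memory for other devices
max_memory["cpu"] = "24GiB"  # Allow CPU memory as fallback

# Integer keys map to GPU indices; "cpu" caps CPU offload.
ref_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder model id, not from this commit
    device_map="auto",    # let accelerate place weights using max_memory
    max_memory=max_memory,
)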
