[Bug Report] Cannot modify network architecture with hydra if agent config is a configclass #2456


Open
2 of 3 tasks
ozhanozen opened this issue May 9, 2025 · 4 comments · May be fixed by #2511
Labels
bug Something isn't working

Comments

@ozhanozen
Contributor

Describe the bug

I would like to modify the agent's neural network architecture on the fly with Hydra, via command-line arguments, e.g.:

isaaclab -p scripts/reinforcement_learning/rsl_rl/train.py --task my_task 'agent.policy.actor_hidden_dims=[512, 256, 128, 64]'

However, if the agent's configuration uses configclass (as in rsl_rl) and the number of layers in the default config does not match the provided Hydra arguments, I get the following exception:

File "/workspace/isaaclab/source/isaaclab/isaaclab/utils/dict.py", line 103, in update_class_from_dict
    raise ValueError(
ValueError: [Config]: Incorrect length under namespace: /policy/actor_hidden_dims. Expected: 5, Received: 4.

Is there any workaround for this?

Thanks in advance!

System Info

  • Isaac Sim Version: 4.5
  • OS: Ubuntu 22.04
  • GPU: RTX 4090
  • CUDA: 12.2
  • GPU Driver: 535.129.03

Checklist

  • I have checked that there is no similar issue in the repo (required)
  • I have checked that the issue is not in running Isaac Sim itself and is related to the repo

Acceptance Criteria

  • To be able to use Hydra to modify network structure
@RandomOakForest
Collaborator

Thanks for posting this. It's likely we have seen this before. Could you post a fuller error log? The team will review this. Thanks.

@RandomOakForest RandomOakForest added the bug Something isn't working label May 10, 2025
@ozhanozen
Contributor Author

> Thanks for posting this. It's likely we have seen this before. Could you post a fuller error log? The team will review this. Thanks.

Thanks for your reply. A screenshot with the full error log is attached:

[Screenshot: full error log]

My understanding is that while updating the default config with the Hydra arguments, the update_class_from_dict() function is called. This function treats a length mismatch between iterables in the default config and the Hydra arguments as an error and raises an exception. In other words, raising here is a deliberate design choice rather than a bug; however, that design choice produces the limitation described above.
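To make the failure mode concrete, here is a minimal, self-contained sketch of that behavior. This is not the actual Isaac Lab implementation: the helper is heavily simplified, and PolicyCfg with its five-element default is a made-up stand-in for the rsl_rl configclass.

```python
from dataclasses import dataclass, field

# Simplified sketch of a recursive dict-to-class update that enforces
# equal iterable lengths -- NOT the actual Isaac Lab implementation.
def update_class_from_dict(obj, data, ns=""):
    for key, value in data.items():
        key_ns = f"{ns}/{key}"
        obj_mem = getattr(obj, key)
        if isinstance(value, dict):
            # Recurse into nested config sections.
            update_class_from_dict(obj_mem, value, ns=key_ns)
        elif isinstance(value, (list, tuple)) and obj_mem is not None:
            # The length check that rejects a resized network.
            if len(obj_mem) != len(value):
                raise ValueError(
                    f"[Config]: Incorrect length under namespace: {key_ns}."
                    f" Expected: {len(obj_mem)}, Received: {len(value)}."
                )
            setattr(obj, key, value)
        else:
            setattr(obj, key, value)

@dataclass
class PolicyCfg:
    actor_hidden_dims: list = field(default_factory=lambda: [512, 256, 128, 64, 32])

cfg = PolicyCfg()
try:
    # Override with 4 layers vs. 5 in the default config.
    update_class_from_dict(cfg, {"actor_hidden_dims": [512, 256, 128, 64]})
except ValueError as e:
    print(e)
```

Running this raises the same "Incorrect length under namespace" error as the traceback above, even though replacing the list outright would be a perfectly sensible override.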

As a quick workaround, I tried to add the following exception for flat lists:

# If flat list, replace wholesale
if all(not isinstance(el, dict) for el in value):
    out_val = tuple(value) if isinstance(obj_mem, tuple) else value
    if isinstance(obj, dict):
        obj[key] = out_val
    else:
        setattr(obj, key, out_val)
    continue

right before the length check that causes the limitation:

if len(obj_mem) != len(value) and obj_mem is not None:
    raise ValueError(
        f"[Config]: Incorrect length under namespace: {key_ns}."
        f" Expected: {len(obj_mem)}, Received: {len(value)}."
    )

This seems to resolve the limitation. However, I am not sure whether it would break something elsewhere. If you could provide feedback toward a robust solution, I could create a PR accordingly.
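For reference, the workaround can be exercised end to end with a simplified stand-in for the real helper (the function below is a sketch with the flat-list escape hatch applied, and SimpleNamespace substitutes for an actual configclass):

```python
from types import SimpleNamespace

# Sketch of the patched update, with the flat-list check placed before
# the length check -- a stand-in, not the actual Isaac Lab code.
def update_class_from_dict(obj, data, ns=""):
    for key, value in data.items():
        key_ns = f"{ns}/{key}"
        obj_mem = obj[key] if isinstance(obj, dict) else getattr(obj, key)
        if isinstance(value, dict):
            update_class_from_dict(obj_mem, value, ns=key_ns)
        elif isinstance(value, (list, tuple)):
            # If flat list, replace wholesale instead of enforcing length.
            if all(not isinstance(el, dict) for el in value):
                out_val = tuple(value) if isinstance(obj_mem, tuple) else value
                if isinstance(obj, dict):
                    obj[key] = out_val
                else:
                    setattr(obj, key, out_val)
                continue
            if obj_mem is not None and len(obj_mem) != len(value):
                raise ValueError(
                    f"[Config]: Incorrect length under namespace: {key_ns}."
                    f" Expected: {len(obj_mem)}, Received: {len(value)}."
                )
        else:
            setattr(obj, key, value)

cfg = SimpleNamespace(policy=SimpleNamespace(actor_hidden_dims=[512, 256, 128, 64, 32]))
update_class_from_dict(cfg, {"policy": {"actor_hidden_dims": [512, 256, 128, 64]}})
print(cfg.policy.actor_hidden_dims)  # [512, 256, 128, 64]
```

With the escape hatch, a shorter (or longer) hidden-dims list is accepted, while nested dict structures still go through the original length check.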

@RandomOakForest
Collaborator

Thanks for submitting this. Would you like to submit a PR that we could review?

@ozhanozen
Contributor Author

> Thanks for submitting this. Would you like to submit a PR that we could review?

Hi @RandomOakForest , I have done so.
