How to use a method from a custom class as the func in ObservationTermCfg #38


Open
67815674 opened this issue Oct 13, 2024 · 4 comments


67815674 commented Oct 13, 2024

When I try to create a class to manage the observation functions myself, I encounter the following error:

```
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab_tasks/omni/isaac/lab_tasks/utils/hydra.py", line 85, in hydra_main
    env_cfg.from_dict(hydra_env_cfg["env"])
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/configclass.py", line 140, in _update_class_from_dict
    update_class_from_dict(obj, data, _ns="")
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/dict.py", line 87, in update_class_from_dict
    update_class_from_dict(obj_mem, value, _ns=key_ns)
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/dict.py", line 87, in update_class_from_dict
    update_class_from_dict(obj_mem, value, _ns=key_ns)
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/dict.py", line 110, in update_class_from_dict
    value = string_to_callable(value)
  File "/home/robot/code/IsaacLab/source/extensions/omni.isaac.lab/omni/isaac/lab/utils/string.py", line 158, in string_to_callable
    callable_object = getattr(mod, attr_name)
AttributeError: module 'robot_lab.tasks.locomotion.walker.mdp.observations' has no attribute 'get_gait_phase'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
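For context on why this fails: the config machinery serializes each term's `func` to a string and later resolves it with `getattr` on the imported module (the `string_to_callable` frame in the traceback). The sketch below is an illustrative reimplementation of that resolution step, not the actual IsaacLab helper; it assumes a `"module:attribute"` string format.

```python
# Illustrative sketch of resolving a "module:attribute" string to a callable
# (hypothetical names; not the real IsaacLab string_to_callable).
import importlib

def resolve_callable(spec: str):
    """Resolve 'some.module:attr' to an attribute of the imported module."""
    mod_name, attr_name = spec.split(":")
    mod = importlib.import_module(mod_name)
    # getattr only sees names defined at module level, so a method bound to a
    # class instance (e.g. ObsTermAppends().get_gait_phase) cannot be found
    # here and raises AttributeError, as in the traceback above.
    return getattr(mod, attr_name)

print(resolve_callable("math:sqrt")(9.0))  # 3.0
```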

67815674 commented Oct 13, 2024

Here is part of the code for my custom class:

```python
class ObsTermAppends:

    def __init__(self):
        self.cycle_time = 0.64
        self.target_joint_pos_scale = 0.2
        self.num_envs = 4096
        self.N = 10

    def get_gait_phase(self, env: ManagerBasedRLEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")):
        # return float mask: 1 is stance, 0 is swing
        asset: Articulation = env.scene[asset_cfg.name]
        self.ref_dof_pos = torch.zeros_like(asset.data.joint_pos[:, asset_cfg.joint_ids])
        self.phase = env.episode_length_buf * env.step_dt / self.cycle_time
        self.stance_mask = torch.zeros((env.num_envs, 2))
        self.compute_ref_state()
        sin_pos = torch.sin(2 * torch.pi * self.phase)
        # Add double support phase
        self.stance_mask.zero_()
        # left foot stance
        self.stance_mask[:, 0] = sin_pos >= 0
        # right foot stance
        self.stance_mask[:, 1] = sin_pos < 0
        # Double support phase
        self.stance_mask[torch.abs(sin_pos) < 0.1] = 1
        return self.stance_mask

    def base_pos_z(self, env: ManagerBasedRLEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
        """Root height in the simulation world frame."""
        # extract the used quantities (to enable type-hinting)
        asset: Articulation = env.scene[asset_cfg.name]
        # return directly: assigning to self.base_pos_z would shadow this method
        return asset.data.root_pos_w[:, 2].unsqueeze(-1)

    def base_lin_vel(self, env: ManagerBasedRLEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
        """Root linear velocity in the asset's root frame."""
        asset: Articulation = env.scene[asset_cfg.name]
        return asset.data.root_lin_vel_b

    def base_ang_vel(self, env: ManagerBasedRLEnv, asset_cfg: SceneEntityCfg = SceneEntityCfg("robot")) -> torch.Tensor:
        """Root angular velocity in the asset's root frame."""
        asset: Articulation = env.scene[asset_cfg.name]
        return asset.data.root_ang_vel_b
```


67815674 commented Oct 13, 2024

And I am using these functions like this:

```python
class ObservationsCfg:
    """Observation specifications for the MDP."""

    obsAppends = ObsTermAppends()

    get_gait_phase = ObsTerm(
        func=obsAppends.get_gait_phase,
    )
    diff_ref_pos = ObsTerm(
        func=obsAppends.diff_ref_pos,
    )
    # observation terms (order preserved)
    base_pos_z = ObsTerm(
        func=obsAppends.base_pos_z,
        noise=Unoise(n_min=-0.1, n_max=0.1),
    )
    base_lin_vel = ObsTerm(
        func=obsAppends.base_lin_vel,
        noise=Unoise(n_min=-0.1, n_max=0.1),
    )
    base_ang_vel = ObsTerm(
        func=obsAppends.base_ang_vel,
        noise=Unoise(n_min=-0.2, n_max=0.2),
    )
```

fan-ziqi (Contributor) commented Jan 8, 2025

Hi, you can write it like this.

Add a new obs func in `robot_lab.tasks.locomotion.velocity.mdp`:

```python
def phase(env: ManagerBasedRLEnv) -> torch.Tensor:
    # Compute phase here
    phase_tensor = xxx
    return phase_tensor
```

Then import and use it:

```python
import robot_lab.tasks.locomotion.velocity.mdp as mdp
from robot_lab.tasks.locomotion.velocity.velocity_env_cfg import ObservationsCfg

@configclass
class XXXObservationsCfg(ObservationsCfg):
    @configclass
    class PolicyCfg(ObservationsCfg.PolicyCfg):
        phase = ObsTerm(func=mdp.phase, scale=1.0)

        def __post_init__(self):
            super().__post_init__()

    policy: PolicyCfg = PolicyCfg()
```
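As one way to fill in that template, here is a sketch of a module-level `phase` function using the gait-phase logic from the original class. The `CYCLE_TIME` constant is an assumption (replace it with your value), and `env` is expected to be the usual `ManagerBasedRLEnv` (left untyped here to keep the sketch self-contained).

```python
import torch

CYCLE_TIME = 0.64  # assumed gait cycle length in seconds

def get_gait_phase(env) -> torch.Tensor:
    """Return a (num_envs, 2) stance mask: 1 = stance, 0 = swing."""
    # normalized phase within the gait cycle
    phase = env.episode_length_buf * env.step_dt / CYCLE_TIME
    sin_pos = torch.sin(2 * torch.pi * phase)
    stance_mask = torch.zeros((env.num_envs, 2), device=env.device)
    stance_mask[:, 0] = (sin_pos >= 0).float()   # left foot stance
    stance_mask[:, 1] = (sin_pos < 0).float()    # right foot stance
    stance_mask[torch.abs(sin_pos) < 0.1] = 1.0  # double support phase
    return stance_mask
```

Because this function lives at module level, the config machinery can serialize and resolve it by name, avoiding the `AttributeError` from the original issue.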

@MissWu-615

I have a similar question. I defined my own ObservationsCfg, ActionCfg, EventCfg, and CommandsCfg, because I didn't want them to be overridden if I used the configuration from robot_lab.tasks.locomotion.velocity.velocity_env_cfg. But when I trained, I got this error:

```
Could not find git repository in /home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/rsl_rl/__init__.py. Skipping.
Storing git diff for 'IsaacLab' in: /home/wu/Downloads/IsaacLab/logs/rsl_rl/tinker_flat/2025-03-26_17-48-44/git/IsaacLab.diff
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/wu/Downloads/IsaacLab/scripts/reinforcement_learning/rsl_rl/train.py", line 153, in <module>
    main()
  File "/home/wu/Downloads/IsaacLab/source/isaaclab_tasks/isaaclab_tasks/utils/hydra.py", line 104, in wrapper
    hydra_main()
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/wu/Downloads/IsaacLab/source/isaaclab_tasks/isaaclab_tasks/utils/hydra.py", line 101, in hydra_main
    func(env_cfg, agent_cfg, *args, **kwargs)
  File "/home/wu/Downloads/IsaacLab/scripts/reinforcement_learning/rsl_rl/train.py", line 145, in main
    runner.learn(num_learning_iterations=agent_cfg.max_iterations, init_at_random_ep_len=True)
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/rsl_rl/runners/on_policy_runner.py", line 113, in learn
    actions = self.alg.act(obs, critic_obs)
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/rsl_rl/algorithms/ppo.py", line 75, in act
    self.transition.actions = self.actor_critic.act(obs).detach()
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/rsl_rl/modules/actor_critic.py", line 105, in act
    self.update_distribution(observations)
  File "/home/wu/miniconda3/envs/isaaclab/lib/python3.10/site-packages/rsl_rl/modules/actor_critic.py", line 102, in update_distribution
    self.distribution = Normal(mean, mean * 0.0 + self.std)
  File "/home/wu/Downloads/isaacsim/exts/omni.isaac.ml_archive/pip_prebundle/torch/distributions/normal.py", line 59, in __init__
    super().__init__(batch_shape, validate_args=validate_args)
  File "/home/wu/Downloads/isaacsim/exts/omni.isaac.ml_archive/pip_prebundle/torch/distributions/distribution.py", line 71, in __init__
    raise ValueError(
ValueError: Expected parameter loc (Tensor of shape (500, 10)) of distribution Normal(loc: torch.Size([500, 10]), scale: torch.Size([500, 10])) to satisfy the constraint Real(), but found invalid values:
tensor([[ 0.0005, -0.1112, 0.1013, ..., 0.0517, -0.0309, -0.1245],
        [-0.0250, -0.1002, 0.1011, ..., 0.0629, -0.0326, -0.1150],
        [ 0.0355, -0.0955, 0.0909, ..., 0.0599, -0.0542, -0.1233],
        ...,
        [-0.0105, -0.0870, 0.1079, ..., 0.0691, 0.0345, -0.1311],
        [ 0.0274, -0.0662, 0.0939, ..., 0.0541, -0.0115, -0.0833],
        [ 0.0296, -0.0955, 0.0901, ..., 0.0424, -0.0467, -0.1041]],
       device='cuda:0')
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
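This `ValueError` from `torch.distributions.Normal` usually means the policy network produced NaN/Inf outputs, which in turn often traces back to NaN/Inf values in the observations. A generic (not IsaacLab-specific) debugging sketch for narrowing down the offending observation term, assuming you can get the per-term tensors into a dict:

```python
import torch

def find_invalid(obs: dict[str, torch.Tensor]) -> list[str]:
    """Return the names of observation tensors containing NaN or Inf."""
    bad = []
    for name, tensor in obs.items():
        if not torch.isfinite(tensor).all():
            bad.append(name)
    return bad

# hypothetical example terms
obs = {"base_lin_vel": torch.zeros(4, 3), "phase": torch.tensor([float("nan")])}
print(find_invalid(obs))  # ['phase']
```

Running such a check on each custom observation term (e.g. right before the manager concatenates them) usually pinpoints which of your new cfg classes feeds invalid values into the policy.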
