
Commit 25e8bd2

Author: Vincent Moens
[Deprecation] Deprecate default num_cells in MLP (#2395)
1 parent 012cf74 commit 25e8bd2


47 files changed, +110 −97 lines

docs/source/reference/collectors.rst

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ worker) may also impact the memory management. The key parameters to control are
:obj:`devices` which controls the execution devices (ie the device of the policy)
and :obj:`storing_device` which will control the device where the environment and
data are stored during a rollout. A good heuristic is usually to use the same device
- for storage and compute, which is the default behaviour when only the `devices` argument
+ for storage and compute, which is the default behavior when only the `devices` argument
is being passed.

Besides those compute parameters, users may choose to configure the following parameters:
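For context on the passage above, here is a minimal sketch of letting storage follow compute by passing only the device argument; the environment name, frame counts and the use of a random policy are illustrative assumptions, not part of this commit:

import torch
from torchrl.collectors import SyncDataCollector
from torchrl.envs import GymEnv

device = "cuda:0" if torch.cuda.is_available() else "cpu"
collector = SyncDataCollector(
    create_env_fn=lambda: GymEnv("Pendulum-v1"),  # illustrative env choice
    policy=None,                 # None falls back to a random policy
    frames_per_batch=64,
    total_frames=256,
    device=device,               # compute device; storage follows it by default
    # storing_device=...,        # only needed to store rollouts elsewhere
)
for batch in collector:
    print(batch.device)          # same device as compute, per the default above
    break
collector.shutdown()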

docs/source/reference/data.rst

Lines changed: 1 addition & 1 deletion
@@ -171,7 +171,7 @@ using the following components:
Storage choice is very influential on replay buffer sampling latency, especially
in distributed reinforcement learning settings with larger data volumes.
:class:`~torchrl.data.replay_buffers.storages.LazyMemmapStorage` is highly
- advised in distributed settings with shared storage due to the lower serialisation
+ advised in distributed settings with shared storage due to the lower serialization
cost of MemoryMappedTensors as well as the ability to specify file storage locations
for improved node failure recovery.
The following mean sampling latency improvements over using :class:`~torchrl.data.replay_buffers.ListStorage`
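As a minimal sketch of the storage choice discussed above (the buffer size, batch size and temporary scratch directory are illustrative assumptions):

import tempfile

from torchrl.data import TensorDictReplayBuffer
from torchrl.data.replay_buffers.storages import LazyMemmapStorage

scratch_dir = tempfile.mkdtemp()  # in practice, a path on shared/recoverable storage
rb = TensorDictReplayBuffer(
    storage=LazyMemmapStorage(max_size=100_000, scratch_dir=scratch_dir),
    batch_size=256,
)
# rb.extend(...) and rb.sample() then read and write through memory-mapped files.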

docs/source/reference/envs.rst

Lines changed: 5 additions & 5 deletions
@@ -318,7 +318,7 @@ have on an environment returning zeros after reset:

We also offer the :class:`~.SerialEnv` class that enjoys the exact same API but is executed
serially. This is mostly useful for testing purposes, when one wants to assess the
- behaviour of a :class:`~.ParallelEnv` without launching the subprocesses.
+ behavior of a :class:`~.ParallelEnv` without launching the subprocesses.

In addition to :class:`~.ParallelEnv`, which offers process-based parallelism, we also provide a way to create
multithreaded environments with :obj:`~.MultiThreadedEnv`. This class uses `EnvPool <https://github.com/sail-sg/envpool>`_

@@ -499,7 +499,7 @@ current episode.
To handle these cases, torchrl provides a :class:`~torchrl.envs.AutoResetTransform` that will copy the observations
that result from the call to `step` to the next `reset` and skip the calls to `reset` during rollouts (in both
:meth:`~torchrl.envs.EnvBase.rollout` and :class:`~torchrl.collectors.SyncDataCollector` iterations).
- This transform class also provides a fine-grained control over the behaviour to be adopted for the invalid observations,
+ This transform class also provides a fine-grained control over the behavior to be adopted for the invalid observations,
which can be masked with `"nan"` or any other values, or not masked at all.

To tell torchrl that an environment is auto-resetting, it is sufficient to provide an ``auto_reset`` argument

@@ -755,10 +755,10 @@ registered buffers:
>>> TransformedEnv(base_env, third_transform.clone()) # works

On a single process or if the buffers are placed in shared memory, this will
- result in all the clone transforms to keep the same behaviour even if the
+ result in all the clone transforms to keep the same behavior even if the
buffers are changed in place (which is what will happen with the :class:`CatFrames`
transform, for instance). In distributed settings, this may not hold and one
- should be careful about the expected behaviour of the cloned transforms in this
+ should be careful about the expected behavior of the cloned transforms in this
context.
Finally, notice that indexing multiple transforms from a :class:`Compose` transform
may also result in loss of parenthood for these transforms: the reason is that

@@ -1061,7 +1061,7 @@ the current gym backend or any of its modules:
Another tool that comes in handy with gym and other external dependencies is
the :class:`torchrl._utils.implement_for` class. Decorating a function
with ``@implement_for`` will tell torchrl that, depending on the version
- indicated, a specific behaviour is to be expected. This allows us to easily
+ indicated, a specific behavior is to be expected. This allows us to easily
support multiple versions of gym without requiring any effort from the user side.
For example, considering that our virtual environment has the v0.26.2 installed,
the following function will return ``1`` when queried:
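The gym-version example referenced at the end of the hunk is not part of this diff; the snippet below is a hedged sketch of the ``@implement_for`` pattern, with an illustrative function name and version bounds:

from torchrl._utils import implement_for

@implement_for("gym", "0.26", None)  # selected when gym >= 0.26 is installed
def gym_step_signature():
    return 1

@implement_for("gym", None, "0.26")  # selected for older gym versions
def gym_step_signature():  # noqa: F811
    return 2

# With gym v0.26.2 installed, calling gym_step_signature() returns 1.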

docs/source/reference/modules.rst

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ Exploration wrappers

To efficiently explore the environment, TorchRL proposes a series of wrappers
that will override the action sampled by the policy by a noisier version.
- Their behaviour is controlled by :func:`~torchrl.envs.utils.exploration_mode`:
+ Their behavior is controlled by :func:`~torchrl.envs.utils.exploration_mode`:
if the exploration is set to ``"random"``, the exploration is active. In all
other cases, the action written in the tensordict is simply the network output.
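A small sketch of the switch described above, following the ``exploration_mode`` API referenced in the hunk (more recent TorchRL releases replace this with exploration types):

from torchrl.envs.utils import exploration_mode, set_exploration_mode

with set_exploration_mode("random"):
    assert exploration_mode() == "random"  # exploration wrappers inject noise here

with set_exploration_mode("mode"):
    assert exploration_mode() == "mode"    # wrappers pass the network output through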

examples/distributed/replay_buffers/distributed_replay_buffer.py

Lines changed: 2 additions & 2 deletions
@@ -150,8 +150,8 @@ def _create_and_launch_data_collectors(self) -> None:

class ReplayBufferNode(RemoteTensorDictReplayBuffer):
"""Experience replay buffer node that is capable of accepting remote connections. Being a `RemoteTensorDictReplayBuffer`
- means all of it's public methods are remotely invokable using `torch.rpc`.
- Using a LazyMemmapStorage is highly advised in distributed settings with shared storage due to the lower serialisation
+ means all of its public methods are remotely invokable using `torch.rpc`.
+ Using a LazyMemmapStorage is highly advised in distributed settings with shared storage due to the lower serialization
cost of MemoryMappedTensors as well as the ability to specify file storage locations which can improve ability to recover from node failures.

Args:
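A hedged sketch of what "remotely invokable using `torch.rpc`" means in practice, assuming an RPC group is already initialized elsewhere in the script; the worker name, capacity argument and batch size are illustrative assumptions:

import torch.distributed.rpc as rpc

# Create the node on the worker that hosts the buffer (name and capacity assumed).
buffer_rref = rpc.remote("ReplayBuffer", ReplayBufferNode, args=(10_000,))

# Public methods inherited from RemoteTensorDictReplayBuffer can then be called
# through the RRef helper, e.g. sampling a batch from another node:
batch = buffer_rref.rpc_sync().sample(32)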

sota-implementations/redq/redq.py

Lines changed: 1 addition & 1 deletion
@@ -159,7 +159,7 @@ def main(cfg: "DictConfig"):  # noqa: F821
use_env_creator=False,
)()
if isinstance(create_env_fn, ParallelEnv):
- raise NotImplementedError("This behaviour is deprecated")
+ raise NotImplementedError("This behavior is deprecated")
elif isinstance(create_env_fn, EnvCreator):
recorder.transform[1:].load_state_dict(
get_norm_state_dict(create_env_fn()), strict=False

test/_utils_internal.py

Lines changed: 2 additions & 2 deletions
@@ -56,7 +56,7 @@ def HALFCHEETAH_VERSIONED():

def PONG_VERSIONED():
# load gym
- # Gymnasium says that the ale_py behaviour changes from 1.0
+ # Gymnasium says that the ale_py behavior changes from 1.0
# but with python 3.12 it is already the case with 0.29.1
try:
import ale_py  # noqa

@@ -70,7 +70,7 @@ def PONG_VERSIONED():

def BREAKOUT_VERSIONED():
# load gym
- # Gymnasium says that the ale_py behaviour changes from 1.0
+ # Gymnasium says that the ale_py behavior changes from 1.0
# but with python 3.12 it is already the case with 0.29.1
try:
import ale_py  # noqa

test/test_cost.py

Lines changed: 6 additions & 1 deletion
@@ -149,7 +149,12 @@


# Capture all warnings
- pytestmark = pytest.mark.filterwarnings("error")
+ pytestmark = [
+ pytest.mark.filterwarnings("error"),
+ pytest.mark.filterwarnings(
+ "ignore:The current behavior of MLP when not providing `num_cells` is that the number"
+ ),
+ ]


class _check_td_steady:
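The added filter above ignores the deprecation warning introduced by this commit for MLP's default ``num_cells``. A minimal sketch of the explicit alternative (layer sizes are illustrative):

from torchrl.modules import MLP

# Spelling out num_cells avoids relying on the deprecated default width.
mlp = MLP(in_features=8, out_features=2, num_cells=[32, 32])
print(mlp)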

test/test_libs.py

Lines changed: 1 addition & 1 deletion
@@ -3682,7 +3682,7 @@ class TestRoboHive:
# The other option would be not to use parametrize but that also
# means less informative error trace stacks.
# In the CI, robohive should not coexist with other libs so that's fine.
- # Robohive logging behaviour can be controlled via ROBOHIVE_VERBOSITY=ALL/INFO/(WARN)/ERROR/ONCE/ALWAYS/SILENT
+ # Robohive logging behavior can be controlled via ROBOHIVE_VERBOSITY=ALL/INFO/(WARN)/ERROR/ONCE/ALWAYS/SILENT
@pytest.mark.parametrize("from_pixels", [False, True])
@pytest.mark.parametrize("from_depths", [False, True])
@pytest.mark.parametrize("envname", RoboHiveEnv.available_envs)
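A tiny sketch of the knob mentioned in the comment above; setting the variable before RoboHive is imported is presumably the safe choice:

import os

os.environ["ROBOHIVE_VERBOSITY"] = "SILENT"  # one of ALL/INFO/WARN/ERROR/ONCE/ALWAYS/SILENT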

test/test_transforms.py

Lines changed: 1 addition & 1 deletion
@@ -146,7 +146,7 @@ class TransformBase:

We ask for every new transform tests to be coded following this minimum requirement class.

- Of course, specific behaviours can also be tested separately.
+ Of course, specific behaviors can also be tested separately.

If your transform identifies an issue with the EnvBase or _BatchedEnv abstraction(s),
this needs to be corrected independently.
