
Commit a059670

iden-kalemaj authored and facebook-github-bot committed
Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (#660)
Summary:
Pull Request resolved: #660

Fix for GitHub issue [#649](#649).

**Background**: DPOptimizer is a wrapper for the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related parameters.

**Issue**: Previously, DPOptimizer held `param_groups`, `state`, and `defaults` simply by reference. Another object could therefore update `param_groups` on the DPOptimizer while neglecting to update them on the original Optimizer. The issue shows up, e.g., with the LR (learning rate) scheduler: the learning rate looks as if it is being updated on the DPOptimizer, but it is not actually updated on the original Optimizer (the one that matters).

**Fix**: Use the property decorator to ensure that the three attributes remain the same between DPOptimizer and the original Optimizer.

Reviewed By: HuanyuZhang

Differential Revision: D60453849

fbshipit-source-id: 2f181986e55d853866e1f8492c4e77a8bc2aabb2
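For illustration, here is a minimal, self-contained sketch of the failure mode and of the property-based delegation the fix uses. The `Inner`, `BrokenWrapper`, and `FixedWrapper` classes are hypothetical stand-ins, not Opacus code:

```python
# Sketch (hypothetical classes, not Opacus code): why storing a plain reference
# breaks once the attribute is reassigned, and why property delegation keeps the
# wrapper and the wrapped optimizer in sync.


class Inner:
    """Stands in for the original (wrapped) optimizer."""

    def __init__(self):
        self.param_groups = [{"lr": 0.1}]


class BrokenWrapper:
    """Pre-fix behaviour: copies the reference once at construction time."""

    def __init__(self, inner):
        self.inner = inner
        self.param_groups = inner.param_groups  # shared only until reassigned


class FixedWrapper:
    """Post-fix behaviour: every read and write is forwarded to the inner object."""

    def __init__(self, inner):
        self.inner = inner

    @property
    def param_groups(self):
        return self.inner.param_groups

    @param_groups.setter
    def param_groups(self, value):
        self.inner.param_groups = value


inner = Inner()
broken = BrokenWrapper(inner)
broken.param_groups = [{"lr": 0.01}]  # e.g. a scheduler or a state_dict load
print(inner.param_groups[0]["lr"])    # still 0.1 -> the inner optimizer never sees the update

inner = Inner()
fixed = FixedWrapper(inner)
fixed.param_groups = [{"lr": 0.01}]
print(inner.param_groups[0]["lr"])    # 0.01 -> the assignment reached the inner optimizer
```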
1 parent f1d0e02 commit a059670

File tree

1 file changed: +43, -5 lines changed


opacus/optimizers/optimizer.py

Lines changed: 43 additions & 5 deletions
@@ -15,6 +15,7 @@
 from __future__ import annotations
 
 import logging
+from collections import defaultdict
 from typing import Callable, List, Optional, Union
 
 import torch
@@ -240,10 +241,6 @@ def __init__(
         self.step_hook = None
         self.generator = generator
         self.secure_mode = secure_mode
-
-        self.param_groups = self.original_optimizer.param_groups
-        self.defaults = self.original_optimizer.defaults
-        self.state = self.original_optimizer.state
         self._step_skip_queue = []
         self._is_last_step_skipped = False
 
@@ -376,6 +373,48 @@ def accumulated_iterations(self) -> int:
         )
         return vals[0]
 
+    @property
+    def param_groups(self) -> List[dict]:
+        """
+        Returns a list containing a dictionary of all parameters managed by the optimizer.
+        """
+        return self.original_optimizer.param_groups
+
+    @param_groups.setter
+    def param_groups(self, param_groups: List[dict]):
+        """
+        Updates the param_groups of the optimizer.
+        """
+        self.original_optimizer.param_groups = param_groups
+
+    @property
+    def state(self) -> defaultdict:
+        """
+        Returns a dictionary holding current optimization state.
+        """
+        return self.original_optimizer.state
+
+    @state.setter
+    def state(self, state: defaultdict):
+        """
+        Updates the state of the optimizer.
+        """
+        self.original_optimizer.state = state
+
+    @property
+    def defaults(self) -> dict:
+        """
+        Returns a dictionary containing default values for optimization.
+        """
+        return self.original_optimizer.defaults
+
+    @defaults.setter
+    def defaults(self, defaults: dict):
+        """
+        Updates the defaults of the optimizer.
+        """
+        self.original_optimizer.defaults = defaults
+
     def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
         """
         Attaches a hook to be executed after gradient clipping/noising, but before the
@@ -386,7 +425,6 @@ def attach_step_hook(self, fn: Callable[[DPOptimizer], None]):
         Args:
             fn: hook function. Expected signature: ``foo(optim: DPOptimizer)``
         """
-
         self.step_hook = fn
 
     def clip_and_accumulate(self):
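A hedged usage sketch of the behaviour this change enables: with the new properties, a standard PyTorch LR scheduler attached to the wrapper should also drive the wrapped optimizer's learning rate. The `DPOptimizer` constructor arguments shown follow the signature in `opacus/optimizers/optimizer.py`; the model and the noise/clipping values are illustrative placeholders.

```python
import torch
from opacus.optimizers import DPOptimizer

model = torch.nn.Linear(4, 2)
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Wrap the base optimizer; noise and clipping settings are placeholders.
dp_optimizer = DPOptimizer(
    base_optimizer,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=8,
)

# The scheduler reads and writes dp_optimizer.param_groups; via the new
# properties this is the very same object as base_optimizer.param_groups.
scheduler = torch.optim.lr_scheduler.StepLR(dp_optimizer, step_size=1, gamma=0.5)
scheduler.step()

# Both views now report the same (updated) learning rate.
assert dp_optimizer.param_groups[0]["lr"] == base_optimizer.param_groups[0]["lr"]
```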

0 commit comments
