Fix issue with setting of param_groups/defaults/state for the DPOptimizer wrapper (#660)
Summary:
Pull Request resolved: #660
Fix for GitHub issue [#649](#649)
**Background**: DPOptimizer is a wrapper around the original non-DP Optimizer selected by the user. `param_groups`, `state`, and `defaults` are attributes of DPOptimizer that store all parameters related to the learning algorithm, including privacy-related ones.
**Issue**: Previously, DPOptimizer exposed `param_groups`, `state`, and `defaults` simply by reference. As a result, another object could update one of these attributes on the DPOptimizer while neglecting to update it on the original Optimizer. The issue surfaces, e.g., with LR (learning rate) schedulers, where the learning rate appears to be updated on the DPOptimizer but is never actually updated on the original Optimizer (the one that matters).
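A hedged illustration of the failure mode; `NaiveWrapper` below is a hypothetical stand-in for the pre-fix pattern, not the actual Opacus code:

```python
import torch


class NaiveWrapper:
    """Hypothetical pre-fix pattern: copy the attribute reference once."""

    def __init__(self, optimizer):
        self.original_optimizer = optimizer
        # Shares the list object only until someone reassigns the attribute.
        self.param_groups = optimizer.param_groups


model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
wrapped = NaiveWrapper(optimizer)

# Reassigning the wrapper's attribute silently detaches it from the
# original optimizer:
wrapped.param_groups = [{**g, "lr": 0.01} for g in wrapped.param_groups]
print(wrapped.param_groups[0]["lr"])    # 0.01
print(optimizer.param_groups[0]["lr"])  # still 0.1 -> out of sync
```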
**Fix**: In this fix, we use the property decorator so that reads and writes of these three attributes are delegated to the original Optimizer, ensuring they remain the same between DPOptimizer and Optimizer.
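A minimal sketch of the property-based delegation, assuming the wrapped optimizer is stored as `original_optimizer`; this is illustrative, not the exact diff (the real class wraps much more, e.g. `step` and `zero_grad`):

```python
from typing import Any

from torch.optim import Optimizer


class DPOptimizer:
    """Sketch: delegate param_groups/state/defaults to the wrapped optimizer."""

    def __init__(self, optimizer: Optimizer):
        self.original_optimizer = optimizer

    @property
    def param_groups(self) -> list[dict[str, Any]]:
        # Reads always resolve to the wrapped optimizer's attribute.
        return self.original_optimizer.param_groups

    @param_groups.setter
    def param_groups(self, param_groups: list[dict[str, Any]]):
        # Writes (e.g. by an LR scheduler) land on the wrapped optimizer too.
        self.original_optimizer.param_groups = param_groups

    @property
    def state(self) -> dict:
        return self.original_optimizer.state

    @state.setter
    def state(self, state: dict):
        self.original_optimizer.state = state

    @property
    def defaults(self) -> dict:
        return self.original_optimizer.defaults

    @defaults.setter
    def defaults(self, defaults: dict):
        self.original_optimizer.defaults = defaults
```

With this delegation there is a single source of truth: a scheduler stepping the wrapper and code reading the original Optimizer both see the same values.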
Reviewed By: HuanyuZhang
Differential Revision: D60453849
fbshipit-source-id: 2f181986e55d853866e1f8492c4e77a8bc2aabb2